Subject: Re: [netatalk-admins] Netatalk and OPI
From: Magnus Stenman (stone@hkust.se)
Date: Fri Jan 22 1999 - 06:26:07 EST


How about patching afpd and samba to notify (pass the inode or whatever)
an OPI daemon via a local-only socket every time a
directory has been modified?

If this is configurable per share/directory,
and doesn't slow down normal operation, perhaps
it might even be a standard part of afpd/samba.

Possibly, there would have to be an intermediate
daemon to queue those notifications to avoid
afpd/samba slowing down too much (or multithread the OPI daemon).
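Roughly what I have in mind on the afpd/samba side - just a sketch; the
socket path, message format, and function name are made up, and it is
untested:

/* Sketch: called from afpd/samba after something is written into a
 * watched share.  Sends the affected directory path to a local OPI
 * daemon over a UNIX domain datagram socket. */
#include <sys/socket.h>
#include <sys/un.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define OPI_SOCKET_PATH "/var/run/opi-notify"   /* made-up path */

int opi_notify(const char *dirpath)
{
    struct sockaddr_un sa;
    int fd, ret;

    fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    /* non-blocking, so a stuck or missing OPI daemon never stalls afpd */
    fcntl(fd, F_SETFL, O_NONBLOCK);

    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, OPI_SOCKET_PATH, sizeof(sa.sun_path) - 1);

    ret = sendto(fd, dirpath, strlen(dirpath) + 1, 0,
                 (struct sockaddr *)&sa, sizeof(sa));
    close(fd);
    return ret < 0 ? -1 : 0;
}

The OPI daemon (or the intermediate queueing daemon) would just read
datagrams off that socket and decide what to rescan; if nobody is
listening, the sendto() fails and afpd carries on as before.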

/magnus

rodgerd@wnl.co.nz wrote:
>
> On Thu, Jan 21, 1999 at 09:24:11PM +0100, Hans-Guenter Weigand wrote:
>
> > > > Do you refer to an OPI specification 2.0 or the Helios OPI 2.0 product?
> > >
> > > OPI spec 2.0.
> >
> > URL available?
>
> It isn't available from their web site, but you should be able to find it at
> ftp://www.adobe.com/pub/adobe/devrelations/devtechnotes/pdffiles/opi_2.pdf
>
> > Uh, yes, I forgot about NFS and the PC world.
>
> Something many people would like to be able to do 8).
>
> > Helios Ethershare seems to work like this: A file copied to the volume
> > from a Mac is instantly checked for type, and a layout file is generated
> > if appropriate. If you copy it there using NFS, you do not get a layout
> > file. Their afp-daemon notifies the opi-daemon, which forks and
> > downsamples the image file. If you bypass the afpd somehow, you have to
> > touch the file from a Mac to trigger the opid. This model offers prompt
> > service to Mac users, but obviously has some disadvantages.
>
> Quite. As I indicated, I think we can offer prompt service on a
> cross-platform basis to everyone, which would be the ideal. Reading your
> description did get me thinking about optimal ways of downsampling though;
> previously, I'd just worked from the notion that one does a downsample from
> the big file. Thinking about how Helios could be hooking into their afpd
> analogue for more speed got me onto a slightly better (and, in hindsight,
> more obvious) approach: checking the resource fork and the PostScript for
> embedded previews. That way, one could deliver low-res samples with pretty
> impressive speed; we'd only bog down on files that don't have a preview
> already.
>
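Checking for an embedded preview should be cheap. A rough sketch of the
EPSI case only - it just scans the first few KB for a %%BeginPreview:
comment; the resource fork and DOS EPS binary cases are left out, and the
buffer size is arbitrary:

/* Sketch: look for an EPSI preview in the first few KB of an EPS file.
 * Returns 1 if a "%%BeginPreview:" comment is found, 0 if not, -1 on error. */
#include <stdio.h>
#include <string.h>

int has_epsi_preview(const char *path)
{
    char buf[8192 + 1];
    size_t n;
    FILE *fp = fopen(path, "rb");

    if (fp == NULL)
        return -1;
    n = fread(buf, 1, sizeof(buf) - 1, fp);
    fclose(fp);

    buf[n] = '\0';                 /* crude: stops at any embedded NUL */
    return strstr(buf, "%%BeginPreview:") != NULL;
}

If that hits, the preview bitmap sits between %%BeginPreview: and
%%EndPreview as hex data, so producing the low-res file is mostly a matter
of decoding it rather than rasterising the whole EPS.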
> > Prepress people here are used to a different model (which I would
> > prefer),
>
> Configurability is everything 8).
>
> > which places the layout files into the same directory as the
> > hires files. "mypic.tiff" gets a little brother named "mypic.tiff.lay".
>
> That's straightforward enough, however...
>
> > An average path looks like this in the ufs:
> > "<sharepoint>/<customer>/1998/<jobnumber>/". All files go into the
> > <jobnumber> folder, possibly with some subfolders.
>
> Does this mean that you have the Helios OPI watching every job folder? In
> that case, I can see the value in hacking the afpd analogue to talk to
> the sampler, because that would get a little more awkward without
> co-operating with it. Not impossible, but it would be harder to implement
> elegantly.
>
> > Once the job is done its folder is put on CD and/or other media and
> > deleted from the server volume. There's never enough disk space ;)
>
> Amen to that.
>
> > Can you give some impression of the time needed to find the new image
> > file in a, say, 100 GB ufs volume (with reduced number of inodes)? I'm
> > quite new to Unix programming and know little about the wheels and gears
> > inside yet.
>
> Depends on whether you search the whole hierarchy or not; I also don't have
> a UFS volume handy. I'll get some times across some 10-24 GB ext2 fses over
> the weekend, and take it from there.
>
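For the "find the new files" part, walking the share with nftw() and
comparing mtimes against the last scan is probably the obvious first cut.
A sketch - the cutoff handling is simplified and the 15-minute window is
arbitrary:

/* Sketch: walk a share and report files modified since the last scan.
 * Real code would persist the last-scan time and skip .lay files etc. */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <time.h>

static time_t last_scan;

static int check_entry(const char *path, const struct stat *st,
                       int type, struct FTW *ftwbuf)
{
    if (type == FTW_F && st->st_mtime > last_scan)
        printf("new/changed: %s\n", path);
    return 0;                           /* keep walking */
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    last_scan = time(NULL) - 15 * 60;   /* "changed in the last 15 min" */
    return nftw(argv[1], check_entry, 32, FTW_PHYS);
}

It is bounded by how fast the box can stat the whole tree, which is exactly
why having afpd tell us about changes directly looks attractive.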
> > > many queues are available and how deep those queues are (do you allow one,
> > > two, three, or more items to be processed in a given queue).
> >
> > Isn't it sufficient to start a downsampling process with nice +10 for
> > every detected new file? The kernel would have to handle "queuing" then.
>
> That's one way; OTOH, one may want to specify some queues as more equal than
> others - you might have a queue set up for people who are scanning work a
> few days in advance (something we do here, being a newspaper), where the
> ability to get everything sampled all at once, as quickly as possible, is
> lower priority.
>
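For what it's worth, the "let the kernel do the queuing" variant is only a
few lines - a sketch, with the downsampler command name made up:

/* Sketch: kick off one reniced downsampling process per new file and
 * let the scheduler sort it out.  "downsample" is a placeholder command. */
#include <sys/types.h>
#include <unistd.h>

void spawn_downsample(const char *hires_path)
{
    pid_t pid = fork();

    if (pid == 0) {                       /* child */
        nice(10);                         /* run below normal priority */
        execlp("downsample", "downsample", hires_path, (char *)NULL);
        _exit(127);                       /* exec failed */
    }
    /* parent: don't wait here; reap children via a SIGCHLD handler */
}

Rodger's point still stands, though: as soon as some shares should be
sampled ahead of others, the daemon needs to keep its own per-queue lists
instead of throwing everything at the scheduler at once.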
> > > Anyways, all this talk inspired me to dig up the old code this morning. I
> > > took one look at it, threw most of it away, and am now happily banging away
> > > at FreePO[2] again.
> >
> > Fine! Let me know if and how I may help. I run OpenBSD-current and
> > Solaris (2.)7 here, and MacOS of course.
>
> Excellent. I have access to Linux, MacOS, Windows and AIX, although the
> bulk of the initial work will be on Linux.
>
> > Isn't there some p2c utility, which does the dirty work?
>
> That's a Pascal-to-C translator. There's not actually that colossal a
> difference between Perl and C for a lot of work; moreover, for a lot of the
> work being done here - mucking about with PostScript streams - Perl most
> likely handles the text involved better than any C I could ever write.
> The filesystem stuff may be a different matter.
>
> I'll spend some time hacking over the weekend and make the results
> available early next week. We should probably take the discussion away from
> netatalk-admins, unless we start trying to integrate stuff into netatalk,
> although I don't have a listserv of my own.
>
> --
> Rodger Donaldson rodger.donaldson@wnl.co.nz
> Systems Support Direct line : 04 474 0560
> Wellington Newspapers Limited Fax : 04 474 0309
> You are in a maze of twisty little companies, all working against each other.


