[fpc-pascal] Re: GetAffinity\SetAffinity
Mark Morgan Lloyd
markMLl.fpc-pascal at telemetry.co.uk
Thu Nov 21 10:10:43 CET 2013
Brian wrote:
> Mark,
>
> All the documentation seems to indicate that processes and threads are
> treated alike in Linux,
Remember that historically there were two different threading models:
LinuxThreads, where each thread was implemented as a separate process,
and the newer NPTL. Somewhere I've got code that can test for this: the
difference is detectable at runtime, and different architectures made
the transition at significantly different times.
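A minimal sketch of one well-known test (not necessarily the code I had
in mind, and assuming FPC's fpGetPid, which does a direct syscall rather
than going through any cached value): under LinuxThreads each thread was
a separate kernel process with its own PID, whereas under NPTL all
threads report the same PID.

program whichthreads;
{$mode objfpc}

uses
  cthreads, unixtype, BaseUnix;

var
  MainPid, ThreadPid: TPid;

function ThreadFunc(p: pointer): ptrint;
begin
  { under LinuxThreads this is a PID of its own;
    under NPTL it matches the main thread's PID }
  ThreadPid := fpGetPid;
  Result := 0;
end;

var
  tid: TThreadID;
begin
  MainPid := fpGetPid;
  tid := BeginThread(@ThreadFunc);
  WaitForThreadTerminate(tid, 0);
  if ThreadPid = MainPid then
    WriteLn('NPTL: threads share the process PID')
  else
    WriteLn('LinuxThreads: each thread has its own PID');
end.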
My understanding is that ownership of resources, in particular memory,
is still a property of processes rather than threads.
> however, even having established that I can
> apparently select a core to run a thread, I haven't yet been able to
> make it work.
>
> Here are my findings from a dual-core Intel CPU.
>
> Using sched_setaffinity() I can run a program on either core 1 or
> core 2, as long as all threads are set to the same core.
>
> If the main process (program) is set to core 2 and all the threads are
> set to core 2, everything is fine and dandy.
>
> However, if the main process (program) is set to core 1 and any thread
> is set to core 2 using pthread_setaffinity_np(), then when the core 2
> thread starts to run it generates an immediate GP fault.
If the API permits, try setting the affinity of the process to both
cores, and then adjusting the affinity of the individual threads within
this. Never try to move the affinity of a thread outside that of the
process that owns its resources.
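In FPC that might look something like the sketch below. The cpu_set_t
layout and the external declarations here are hand-written assumptions
(they aren't part of the RTL), and note that on Linux the kernel
actually keeps affinity per thread: pid 0 means the calling thread, and
threads created afterwards inherit its mask.

program affinitysketch;
{$mode objfpc}

uses
  cthreads, ctypes, unixtype;

type
  { glibc's default cpu_set_t is a 1024-bit (128-byte) mask; this
    fixed layout is an assumption matching x86/x86_64 glibc }
  cpu_set_t = array[0..127] of byte;
  Pcpu_set_t = ^cpu_set_t;

{ hand-written imports -- these bindings are not in the FPC RTL }
function sched_setaffinity(pid: pid_t; cpusetsize: size_t;
  mask: Pcpu_set_t): cint; cdecl; external 'c';
function pthread_self: pthread_t; cdecl; external 'pthread';
function pthread_setaffinity_np(thread: pthread_t; cpusetsize: size_t;
  cpuset: Pcpu_set_t): cint; cdecl; external 'pthread';

procedure CpuZero(out s: cpu_set_t);
begin
  FillChar(s, SizeOf(s), 0);
end;

procedure CpuSet(cpu: integer; var s: cpu_set_t);
begin
  { byte-wise equivalent of glibc's CPU_SET macro (little-endian) }
  s[cpu div 8] := s[cpu div 8] or (1 shl (cpu mod 8));
end;

function Poller(p: pointer): ptrint;
var
  s: cpu_set_t;
begin
  { narrow this thread to core 1 -- still inside the inherited mask }
  CpuZero(s);
  CpuSet(1, s);
  if pthread_setaffinity_np(pthread_self, SizeOf(s), @s) <> 0 then
    WriteLn('pthread_setaffinity_np failed for poller');
  { ... polling work would go here ... }
  Result := 0;
end;

var
  s: cpu_set_t;
  tid: TThreadID;
begin
  { allow both cores first; the new thread inherits this mask }
  CpuZero(s);
  CpuSet(0, s);
  CpuSet(1, s);
  if sched_setaffinity(0, SizeOf(s), @s) <> 0 then
    WriteLn('sched_setaffinity failed');

  tid := BeginThread(@Poller);

  { now narrow the main thread to core 0, a subset of the above }
  CpuZero(s);
  CpuSet(0, s);
  if pthread_setaffinity_np(pthread_self, SizeOf(s), @s) <> 0 then
    WriteLn('pthread_setaffinity_np failed for main thread');

  WaitForThreadTerminate(tid, 0);
end.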
> The intent is to be able to run a thread which polls a serial or Ethernet
> port, rather like one would do with an ISR, but using a separate core for
> the task, while the main program runs on a different core.
Unless you are going directly to the hardware, bear in mind that the
actual device access is going to be done by the kernel using interrupts
and buffers. So while I can sympathise with wanting to dedicate a core
to the "heavy lifting", do you /really/ have to poll the buffers with a
tight loop, rather than letting the OS do the job for which it was
designed? And are your timing requirements really so stringent that you
/have/ to do this?
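To illustrate letting the kernel do the waiting, a minimal sketch (the
device path is a placeholder, and the termios setup for baud rate etc.
is omitted): fpSelect sleeps in the kernel until the driver has buffered
data, so no core is burnt spinning.

program serialwait;
{$mode objfpc}

uses
  ctypes, unixtype, BaseUnix;

var
  fd: cint;
  fds: TFDSet;
  buf: array[0..255] of byte;
  n: TsSize;
begin
  { '/dev/ttyS0' is a placeholder; substitute the real device }
  fd := fpOpen('/dev/ttyS0', O_RDONLY or O_NOCTTY);
  if fd < 0 then
    Halt(1);
  while True do
  begin
    fpFD_ZERO(fds);
    fpFD_SET(fd, fds);
    { block in the kernel until data arrives -- no busy loop }
    if fpSelect(fd + 1, @fds, nil, nil, nil) > 0 then
    begin
      n := fpRead(fd, buf, SizeOf(buf));
      if n <= 0 then
        Break;
      WriteLn('got ', n, ' bytes');  { process them here }
    end;
  end;
  fpClose(fd);
end.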
> At this point I am not certain if there is something I am missing.
At this point I'd be digging into the kernel for my architecture of
choice and finding out what the restrictions are. Bear in mind that it's
a long time since I've tinkered with this, but my understanding is that
much of it originated inside SGI, where the model was a high-speed
network between the "chipsets" of a large number of (in effect) separate
boxes (I've got /one/ SGI system here with that type of port). With that
sort of architecture you'd want to stop processes from moving between
boxes gratuitously, and you might, as a separate setting, want to pin
individual threads to one processor in each box. I've also got systems
comprising multiple boards, where each board has an identical collection
of ports; in that type of architecture you'd want to keep a process near
any ports it was servicing (i.e. with suitable interrupt affinity) even
if traffic between boards was comparatively efficient.
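Interrupt affinity itself is set per IRQ on Linux through procfs. A
minimal sketch, assuming a hypothetical IRQ number (check
/proc/interrupts for the real one; root is required):

program irqpin;
{$mode objfpc}

var
  f: Text;
begin
  { IRQ 19 is a made-up example. The file takes a hex CPU bitmask,
    so '2' (bit 1) steers the interrupt to CPU 1. Requires root. }
  Assign(f, '/proc/irq/19/smp_affinity');
  Rewrite(f);
  WriteLn(f, '2');
  Close(f);
end.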
--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk
[Opinions above are the author's, not those of his employers or colleagues]