p4p.client.thread.Context.put does not always work as expected when used inside a handler #164
Comments
Be careful. This will create an entirely new PVA client instance for each remote PUT. That is substantial overhead considering the target PV is local, so there is no reason to go out over the network at all! The onPut handler for one SharedPV is allowed to post() to another:

```python
from p4p.server import Server
from p4p.server.thread import SharedPV
from p4p.nt import NTScalar

pv1 = SharedPV(nt=NTScalar('i'), initial=0.0)
pv2 = SharedPV(nt=NTScalar('i'), initial=0.0)

@pv2.put
def handle(thispv, op):
    thispv.post(op.value())   # aka. pv2.post(...)
    pv1.post(op.value()*2)    # update the other local PV directly
    op.done()

Server.forever(providers=[{
    "DEV:RW:DOUBLE1": pv1,
    "DEV:RW:DOUBLE2": pv2,
}])
```
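For instance, once the server above is running, the forwarding can be checked from a separate client process. A short sketch, assuming the default 'pva' provider; the expected values follow from the `op.value()*2` line in the handler:

```python
from p4p.client.thread import Context

ctxt = Context('pva')
ctxt.put('DEV:RW:DOUBLE2', 3)
print(ctxt.get('DEV:RW:DOUBLE2'))  # 3, posted by the put handler
print(ctxt.get('DEV:RW:DOUBLE1'))  # 6, i.e. op.value()*2
ctxt.close()
```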
Thanks for the suggestion, I didn't know you could call the post() method of a different PV in the handler. This will fix the problem I'm having, but now I need a way to pass the SharedPV objects for PVs that exist on the same server into the handler. The PVs are created independently, so I'd need to find out which PVs are on the same server after the server has started, since only then will all PVs have been defined. That was part of the reason for using the client `Context` in the first place.
My goal for the APIs of P4P, and PVXS underneath, is for them to be fully re-entrant. Further, P4P objects should interact correctly with the Python cyclic garbage collector. So you should reasonably be able to tie these handler objects up into all kinds of loops.
I think this is more of a general Python question. One approach would be to use bound class members for the handler callbacks, then use …
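A minimal sketch of that bound-member approach, assuming the handler-object form of SharedPV; the `LinkedPVs` class and PV names here are illustrative, not from the original discussion:

```python
from p4p.server import Server
from p4p.server.thread import SharedPV
from p4p.nt import NTScalar

class LinkedPVs:
    """Holds references to related PVs so a bound method handling
    puts can post() to any of them without a network round trip."""
    def __init__(self):
        self.pv1 = SharedPV(nt=NTScalar('d'), initial=0.0)
        # This object is the handler: its put() method is invoked
        # for puts to pv5.
        self.pv5 = SharedPV(nt=NTScalar('d'), initial=0.0, handler=self)

    def put(self, pv, op):
        pv.post(op.value())        # update the PV that was written
        self.pv1.post(op.value())  # local "forward link" to pv1
        op.done()

links = LinkedPVs()
Server.forever(providers=[{
    'DEV:RW:DOUBLE1': links.pv1,
    'DEV:RW:DOUBLE5': links.pv5,
}])
```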
I'm trying to replicate functionality similar to how a forward link works in a traditional IOC, by calling `p4p.client.thread.Context.put` inside the handler for a PV. However, I've run into an issue where this does not always work, depending on the order in which the PVs are created. A simple example of the problem follows.
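A sketch of the setup being described; the exact handler and PV definitions are assumptions, not the verbatim code from the report:

```python
from p4p.client.thread import Context
from p4p.server import Server
from p4p.server.thread import SharedPV
from p4p.nt import NTScalar

# Five PVs created in a row; a put to the fifth should "forward link"
# to the first, mimicking an IOC forward link.
pvs = {'DEV:RW:DOUBLE%d' % i: SharedPV(nt=NTScalar('d'), initial=0.0)
       for i in range(1, 6)}

@pvs['DEV:RW:DOUBLE5'].put
def forward(pv, op):
    pv.post(op.value())
    # Client put from inside the server-side handler.
    # Note: this creates a new client Context on every put.
    Context('pva').put('DEV:RW:DOUBLE1', op.value())
    op.done()

# Trivial put handlers so the remaining PVs are writable.
for name in ('DEV:RW:DOUBLE1', 'DEV:RW:DOUBLE2',
             'DEV:RW:DOUBLE3', 'DEV:RW:DOUBLE4'):
    @pvs[name].put
    def echo(pv, op):
        pv.post(op.value())
        op.done()

Server.forever(providers=[pvs])
```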
The idea is that a put to `DEV:RW:DOUBLE5` then triggers a put to `DEV:RW:DOUBLE1`. However, if I do a put to `DEV:RW:DOUBLE5` (from another terminal), the server reports an error.

The odd thing is that the above code works if I change the link to point to any of the other PVs (e.g. `DEV:RW:DOUBLE2`). It also works if I create another PV between pv1 and pv5. Digging about in the code, I found that I can fix this by changing the number of workers defined in the `__init__` method of the class `_DefaultWorkQueue` in `p4p.util`, i.e. if I use `def __init__(self, workers=5):`.

It looks like this problem occurs for the specific configuration of having 5 PVs in a row with the last one linking to the first. I'm not sure if this is a bug, or if there is a different way to call a put to a PV from within a handler. This issue may also be related to #144.