Introduce CommandRouting service/component (replacing DeviceConnectionService) #2029
The architecture and command message flow would look like this then: […]

Variation: If we use Infinispan anyway here (as storage for the mapping data currently still set by […]). Advantage of this variation would be that the command routing would remain inside the Hono Kubernetes cluster (if the AMQP messaging network is external). A disadvantage would maybe be some added complexity for the communication between the […].

Timeline: Introducing and using the […]
I think we could do 1. and 2. with Hono 1.3 already, then integrate the new functionality (3.) in Hono 1.4. If we stay with the current definition of the Device Connection API as still "subject to change" (i.e. not following the strict versioning rules; I think we defined that in a recent call), we could also do steps 4. and 5. in Hono 1.4. (Otherwise we could keep Hono 1.4 compatible with older adapters and remove things in 2.0.)

@ctron, @dejanb, @sophokles73 What do you think (about architecture/variation and timeline)?
Based on the discussion in #2273, here is an updated design. With that, command routing on the […]

As one protocol adapter shouldn't have to open connections with each CommandRouter instance, it has to be ensured that an incoming command arrives at the very CommandRouter instance that the command target protocol adapter has connected to. This is achieved by simply letting all commands be sent via multicast on the […]

EDIT (2020-11-24): […]
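The dispatch decision described above can be sketched as a small helper: each CommandRouter instance knows which adapter instances are connected to it and which adapter instance each device's commands should be routed to, and it handles a multicast command only when both match. This is an illustrative sketch, not Hono code; all names are hypothetical.

```java
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch of the multicast-based dispatch decision: every
 * CommandRouter instance receives each command and only handles it if the
 * target adapter instance is connected to *this* router instance.
 */
public class MulticastDispatchSketch {

    /** Adapter instance ids currently connected to this router instance. */
    private final Set<String> connectedAdapterInstances;
    /** Mapping of "tenant/device" to the adapter instance the device is connected to. */
    private final Map<String, String> deviceToAdapterInstance;

    public MulticastDispatchSketch(final Set<String> connected, final Map<String, String> mapping) {
        this.connectedAdapterInstances = connected;
        this.deviceToAdapterInstance = mapping;
    }

    /** Returns true if this router instance should handle and forward the command. */
    public boolean shouldHandle(final String tenantId, final String deviceId) {
        final String adapterInstance = deviceToAdapterInstance.get(tenantId + "/" + deviceId);
        return adapterInstance != null && connectedAdapterInstances.contains(adapterInstance);
    }
}
```

A command whose target adapter instance is connected to a different router instance would simply not be handled here, leaving it to the instance that has the connection.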
Looks good. Can we consider here switching the "device connection management API" to HTTP at some point? I find it a better fit, especially in a k8s environment. It's not necessary for the first phase, of course.
Note that based on http://qpid.apache.org/releases/qpid-dispatch-1.14.0/user-guide/index.html#configuring-routing-qdr an unsettled command message sent by an application will be settled with final state accepted if any of the receivers has accepted the message and none have rejected it. IMHO this means that a command routing service instance which does not have a matching link MUST NOT accept such a message (and ignore it) but should instead release the command message in this case. You were probably already aware of this, I just wanted to mention it explicitly here for transparency.
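That settlement rule can be stated as a one-line decision, sketched below (hypothetical names, not proton-j or Hono API): an instance without a matching link releases the command rather than accepting or rejecting it, so that the dispatch router's "accepted if any receiver accepted and none rejected" semantics let the instance that does have the link settle the delivery.

```java
/**
 * Illustrative sketch of the settlement rule for multicast commands.
 * Names are hypothetical; real code would set the AMQP delivery state.
 */
public class CommandDispositionSketch {

    public enum Disposition { ACCEPTED, RELEASED }

    // An instance without a matching consumer link must neither accept nor
    // reject the command; releasing it leaves the final outcome to an
    // instance that has the link.
    public static Disposition dispositionFor(final boolean hasMatchingLink) {
        return hasMatchingLink ? Disposition.ACCEPTED : Disposition.RELEASED;
    }
}
```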
Yes.
Well, if the goal is to have an HTTP API, I think we can define the new "command routing API" as an HTTP API from the start. What I'm still thinking about is whether we should put a […]
But the protocol adapters need an AMQP connection to the CommandRouting component anyway (for receiving the commands). So this connection (and the corresponding configuration) could be used for the API methods as well, not requiring the adapters to use two CommandRouting-related endpoint configurations (HTTP and AMQP). Therefore I think I would rather use AMQP here.
Signed-off-by: Carsten Lohmann <[email protected]>
I've added a PR for an AMQP-based API in #2280.
Just to reiterate the next steps here: To use the new Command Router component, the adapters can be configured with a set of […]

The Command Router component might internally still use the Device Connection API, but that's just an implementation detail. In a future Hono version we can then remove the Device Connection API.

One downside of this approach is that the protocol adapters can't use the HotRod based client to store the "device → adapter instance" mappings in a very efficient manner anymore; instead these mappings get stored via the Command Router component. That would also apply to the "lastKnownGateway" entries. But IMHO the added latency isn't that important when registering consumers. Only the "lastKnownGateway" information gets set frequently, so this could add extra load on the Command Router component. But I don't know if that would be a reason to keep the Device Connection API with just the "lastKnownGateway" methods. We can still consider removing the "lastKnownGateway" usage altogether (see comment above).
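Conceptually, the "device → adapter instance" mapping with a lifespan could look like the plain in-memory sketch below. This is purely illustrative (the actual component stores the data in Infinispan via HotRod); all class and method names here are hypothetical.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative in-memory sketch (not Hono's Infinispan/HotRod based code) of
 * the "device -> adapter instance" mapping with a lifespan after which the
 * entry is no longer returned.
 */
public class AdapterInstanceMappingSketch {

    private record Entry(String adapterInstanceId, Instant expiresAt) { }

    private final Map<String, Entry> mappings = new ConcurrentHashMap<>();

    private static String key(final String tenantId, final String deviceId) {
        return tenantId + "/" + deviceId;
    }

    /** Roughly corresponds to registering a command consumer with a lifespan. */
    public void setAdapterInstance(final String tenantId, final String deviceId,
            final String adapterInstanceId, final Duration lifespan) {
        mappings.put(key(tenantId, deviceId),
                new Entry(adapterInstanceId, Instant.now().plus(lifespan)));
    }

    /** Returns the adapter instance id, or null if unknown or expired. */
    public String getAdapterInstance(final String tenantId, final String deviceId) {
        final Entry entry = mappings.get(key(tenantId, deviceId));
        if (entry == null || Instant.now().isAfter(entry.expiresAt())) {
            return null;
        }
        return entry.adapterInstanceId();
    }
}
```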
This adds a org.eclipse.hono.adapter.client.command.CommandConsumerFactory implementation that uses the new Command Router component. For the integration tests, a new maven profile 'command-router' is added to let the tests run using the Command Router component. The GitHub action workflow has been adapted to use that profile in the test-run that uses the jdbc device registry (so that the other test-runs still use the old command routing mechanism).
When processing Command Router API requests, the original vert.x context of the CommandRouterAmqpServer needs to be restored at the end, because the commandConsumerFactory has switched to the context in which the downstream AMQP connection is used.
The CommandRouterServiceImpl is no longer a Verticle, so there is no longer an issue with CommandRouterAmqpServer requests ending up being handled on the wrong vert.x context.
Corresponding changes have been committed now.
The current Command & Control implementation uses tenant scoped `command/[tenantId]` links to initially receive commands from a downstream application. After having received a command message on that link, the protocol adapter instance that the command target device is connected to is identified and the command is routed to that adapter instance. Currently, each adapter supports these tenant-scoped links and the corresponding routing logic.

This means added complexity for protocol adapter implementations and also makes command processing somewhat unintuitive (a command may be received first by an AMQP adapter and then routed to an MQTT adapter, for example). It also means that potential issues in a custom protocol adapter could have consequences for commands directed at one of the standard Hono protocol adapters (if these command messages are received by the custom protocol adapter first).

This all leads to the idea of introducing a separate component that first receives all command messages and then routes them to the appropriate protocol adapter instance. Such a `CommandRouting` component could then also include the implementation of the current `DeviceConnectionService`. That means that the `CommandRouting` implementation has the service to identify the target protocol adapter instance of a command message right inside its own component. Thinking further along this line, we can just as well replace the `DeviceConnectionService` with the `CommandRouting` service.

Operations of the new `CommandRouting` service (to be used by the protocol adapters):

- `setLastKnownGatewayForDevice(tenant, device, gateway)`
- `registerCommandConsumer(tenant, device, consumerId, lifespan)`
- `unregisterCommandConsumer(tenant, device, consumerId)`

(EDIT: using `consumerId` here means the same as what was called `adapterInstanceId` in the Device Connection API; `adapterInstanceId` was a bit misleading, as there are usually multiple such consumers per adapter instance, corresponding to the number of vert.x verticle instances. `adapterInstanceId` is probably more appropriate since we want to keep the number of related resources (e.g. Kafka topics) low. That requires mapping incoming commands to the vert.x verticles (via the vert.x event bus), though.)

For #1276, this method will probably be needed as well:

- `registerCommandConsumer(tenant, device, correlationId, consumerId, lifespan)`

The `AdapterInstancesLivenessService` (#2028) will then also be incorporated into the `CommandRouting` service. For that, an additional, optional operation could be added:

- `ping(consumerId)`

This would then be used by custom protocol adapters that are not in the same Kubernetes cluster as the other adapters and the `CommandRouting` component.
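The operations listed above can be sketched as a plain Java class. This is purely for illustration: the sketch is synchronous and in-memory, lifespan handling is omitted, and the helper getters are hypothetical; Hono's actual service API would be asynchronous.

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative, in-memory sketch of the proposed CommandRouting operations.
 * Method names follow the list above; everything else is hypothetical.
 */
public class CommandRoutingSketch {

    private final Map<String, String> lastKnownGateways = new HashMap<>();
    private final Map<String, String> commandConsumers = new HashMap<>();

    private static String key(final String tenant, final String device) {
        return tenant + "/" + device;
    }

    public void setLastKnownGatewayForDevice(final String tenant, final String device, final String gateway) {
        lastKnownGateways.put(key(tenant, device), gateway);
    }

    public void registerCommandConsumer(final String tenant, final String device,
            final String consumerId, final Duration lifespan) {
        // lifespan-based expiry is omitted in this sketch
        commandConsumers.put(key(tenant, device), consumerId);
    }

    public void unregisterCommandConsumer(final String tenant, final String device, final String consumerId) {
        // only removes the entry if it still belongs to the given consumer
        commandConsumers.remove(key(tenant, device), consumerId);
    }

    /** Hypothetical helper: the consumer the service would route a command to. */
    public String getCommandConsumer(final String tenant, final String device) {
        return commandConsumers.get(key(tenant, device));
    }

    /** Hypothetical helper for inspecting the last known gateway. */
    public String getLastKnownGateway(final String tenant, final String device) {
        return lastKnownGateways.get(key(tenant, device));
    }
}
```

The two-argument remove in `unregisterCommandConsumer` mirrors the intent of passing the `consumerId`: a stale unregister call from an old consumer must not remove a newer consumer's registration.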