crash in aggregator with flow data containing fields of type IPFIX_TYPE_string or IPFIX_TYPE_octetArray #108
Compiling with the following small diff to disable the InstanceManager, it still crashes, just not in the aggregator hashtable InstanceManager anymore. This time it is deep in IpfixReceiverFile() buffer management code.

```diff
diff --git a/src/core/InstanceManager.h b/src/core/InstanceManager.h
index b0788ef14f9b..75b6df6a19c9 100644
--- a/src/core/InstanceManager.h
+++ b/src/core/InstanceManager.h
@@ -30,6 +30,8 @@
 #include <list>
 #include <algorithm>

+#define IM_DISABLE 1
+
 using namespace std;

 /**
```
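For context, a pooled instance manager can mask use-after-free bugs by recycling memory, which would explain why disabling it merely moves the crash elsewhere. The toy pool below is not Vermont's actual InstanceManager (all names are illustrative); it only sketches the effect such an IM_DISABLE switch would have:

```cpp
#include <cassert>

// Illustration of why disabling an instance pool moves a heap-corruption
// crash: with pooling, a stale pointer silently reuses recycled memory;
// with plain new/delete, the corruption instead surfaces later at an
// unrelated allocation site (e.g. in buffer management code).
template <class T>
struct Pool {
    T* free_ = nullptr;             // single-slot free list, for brevity
    T* acquire() {
#ifdef IM_DISABLE
        return new T();             // bypass pooling entirely
#else
        if (free_) { T* t = free_; free_ = nullptr; return t; }
        return new T();
#endif
    }
    void release(T* t) {
#ifdef IM_DISABLE
        delete t;                   // genuine free: stale pointers now fault
#else
        free_ = t;                  // recycle: stale pointers still "work"
#endif
    }
};
```

With pooling enabled, releasing and re-acquiring hands back the very same object, so a dangling reference into it keeps appearing valid.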
@nickbroon Can you provide a pcap file for a MWE?

The 'ipfix.dump0000000000.gz' attachment to my first comment should provide a MWE.

Oh, I overlooked that :) Thx!
With the above config (and
So the question I guess is about copying
Strangely the
It appears that vermont treats applicationId as an octetArray, and always treats octetArrays as variable length (i.e. as having a length of 65535). In the code:

```cpp
if (!fi->isVariableLength) {
	fieldLength += fi->type.length;
}
// Variable length fields: Extract real length information
else if (fi->type == InformationElement::IeInfo(IPFIX_TYPEID_basicList, 0)) {
	fi->basicListData.semantic = rf->semantic;
	fi->basicListData.fieldIe = new InformationElement::IeInfo(rf->fieldIe);
	// Length is one pointer, as we are storing data in dynamically allocated vector (i.e. pointer to vector)
	fieldLength += sizeof(vector<void*>*);
}
```

So the question is: can it be extended to handle variable length octetArrays? It's worth noting that octetArray fields need not always be variable length; they can be fixed-length encoded using a size smaller than 65535. In the case of the sample data used here, the record contains applicationId encoded with a fixed length of 5. Even if internally vermont treats all octetArrays as variable length, should it be possible to copy fixed-length encoded octetArray data into it?
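For reference, RFC 7011 (Section 7) encodes a variable-length field with a one-octet length prefix, or the escape value 255 followed by a two-octet length in network byte order. A minimal decoding sketch (decodeVarLen is my name, not an existing Vermont function):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Decode the length prefix of a variable-length IPFIX field (RFC 7011, sec. 7).
// 'data' points at the start of the field inside a data record. On success,
// 'payloadLen' receives the real payload length and the return value is the
// number of prefix octets consumed (1 or 3); 0 means the buffer is too short.
size_t decodeVarLen(const uint8_t* data, size_t avail, uint16_t& payloadLen)
{
    if (avail < 1) return 0;
    if (data[0] < 255) {                 // short form: one length octet (0..254)
        payloadLen = data[0];
        return 1;
    }
    if (avail < 3) return 0;             // long form needs two more octets
    payloadLen = static_cast<uint16_t>((data[1] << 8) | data[2]);  // big endian
    return 3;
}
```

A fixed-length encoded octetArray such as the 5-octet applicationId here carries no such prefix; its length comes from the template, which is why the two encodings need separate handling on the copy-in path.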
@nickbroon Thanks for taking the time to analyze this behavior! I think we need to fix several things in order to add support for these types.
@ogasser Thanks for the pointers. I'll use these to look at extending the support. First priority will be to induce a hard error when it attempts to parse any 'octetArray' or 'string' type field, to avoid the crash. Then I'll look at parsing and aggregating octetArray (of which applicationId is one) and string type fields (they are very similar, so should be treatable in the same way).
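That fail-fast guard could look something like the sketch below; the enum and function are illustrative stand-ins, not Vermont's actual types:

```cpp
#include <stdexcept>
#include <string>

// Illustrative guard: reject string/octetArray fields at template-creation
// time with a clear error, instead of crashing later in the hashtable.
enum class IeType { unsigned32, ipv4Address, string, octetArray };

void checkAggregatable(IeType t, const std::string& ieName)
{
    if (t == IeType::string || t == IeType::octetArray)
        throw std::runtime_error(
            "aggregator: unsupported variable-length type for IE '" + ieName + "'");
}
```

Running the check once per template field turns the current memory corruption into a deterministic configuration error.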
When you say "Use real length in BaseHashtable::createDataTemplate() for variable length IEs other than basicList", I'm not sure I fully understand. What is the real length for the Rule::Field in the case of an octetArray field like applicationId? In an aggregator rule there is no given length, so these are simply marked as 65535 in the Rule::Field, and I'm unsure how to allocate space for this field in the hashtable. I suppose it could be stored as a pointer to dynamic memory (like for the basicList), and then when actual flows are received to be inserted into the hashtable, the length can be extracted from them and the memory allocated (bearing in mind that the field in a received flow might be fixed-length or variable-length encoded).
You are completely right. Since we do not know how long variable length IEs will be, we store them dynamically. Therefore the length of that field is the length of one pointer, just as for basicList.
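Concretely, the bucket slot could hold a single pointer and defer allocation until a flow is inserted, mirroring the basicList approach described above. A sketch under that assumption (VarLenField is illustrative, not Vermont code):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical record slot for a variable-length IE: the fixed-size hashtable
// bucket stores only one pointer, and the actual octets live in a
// heap-allocated vector sized only when the flow arrives.
// (Copy/move semantics omitted for brevity.)
struct VarLenField {
    std::vector<uint8_t>* data = nullptr;   // one pointer in the bucket

    void assign(const uint8_t* src, uint16_t len) {
        if (!data) data = new std::vector<uint8_t>();
        data->assign(src, src + len);       // length known only at insert time
    }
    ~VarLenField() { delete data; }
};
```

The bucket cost stays constant regardless of the payload size, so a 5-octet fixed-length applicationId and a 300-octet variable-length value occupy the same slot width.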
As an alternative, instead of Vermont always treating fields of type IPFIX_TYPE_string or IPFIX_TYPE_octetArray as variable-length encoded, it could treat them as fixed-length encoded, with either a hard-coded length defined in the code (say 256), or by allowing the length to be configured per field in the vermont config used to generate the aggregator rules template, with the field data truncated or zero-extended to fill the encoded space. Examples might be:
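As an illustration only, such a per-field length option might look like the rule snippet below; the fieldLength element is hypothetical and does not exist in Vermont's current config schema:

```xml
<rule>
	<templateId>998</templateId>
	<flowKey>
		<ieName>sourceIPv4Address</ieName>
	</flowKey>
	<nonFlowKey>
		<ieName>applicationId</ieName>
		<!-- hypothetical option: emit this octetArray fixed-length
		     encoded at 5 octets, truncating or zero-padding the value -->
		<fieldLength>5</fieldLength>
	</nonFlowKey>
</rule>
```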
I'll explore both options: treating them as variable-length encoded like basicList, or configurable fixed-length encoding, and determine which is both best and easiest to implement. For a lot of typically short IPFIX_TYPE_string and IPFIX_TYPE_octetArray fields like applicationID and interfaceName, collectors often expect to see fixed-length encoded data.
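The truncate-or-zero-pad behaviour described above can be sketched as follows (encodeFixed is an illustrative helper, not existing Vermont code):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Re-encode a received string/octetArray value at a fixed length:
// truncate if the value is longer, zero-pad if it is shorter.
std::vector<uint8_t> encodeFixed(const uint8_t* src, size_t srcLen, size_t fixedLen)
{
    std::vector<uint8_t> out(fixedLen, 0);                     // zero-filled target
    std::memcpy(out.data(), src, std::min(srcLen, fixedLen));  // copy, truncating
    return out;
}
```

Zero-padding keeps the exported record size constant per template, which is what collectors expecting fixed-length encodings of short IEs rely on.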
While examining the code related to flow aggregation, I'm not sure I understand how flowKey/nonFlowKey interacts with the aggregation. The flowKey/nonFlowKey appears to only be used in Am I misunderstanding something? I simply don't see how the flowKey/nonFlowKey configuration is affecting how flows are aggregated together when a flow is found in the hash table.
You are correct, the
That's what I suspected. I'll do some testing to confirm this, but if it's the case it would seem to be a pretty serious bug in Vermont! |
Yes, this is definitely not what users expect when configuring Vermont.
Would this be something you'd have time to look into fixing? |
Unfortunately not in the coming weeks :( If you find the time, however, I would make sure to discuss and review with you the proposed fixes. |
Using the following very basic config containing an aggregator, and the attached basic ipfix dump of flows that contain the "applicationId (95)" field, vermont will crash, with the below backtrace in the aggregator hashtable.
Is anyone familiar with the aggregator/hashtable code able to easily diagnose the cause of the crash?
If the flow capture does not contain the applicationId (95) field, the aggregator does not crash, so perhaps it is related to 'octetArray' field support, or to the fact that in this case the field has a length of 5?
vermont.xml.gz
ipfix.dump0000000000.gz
vermont.log.gz