Implement reconnecting to the amqp server and reinitializing the amqp
channel on errors and timeouts. Both are handled by a goroutine created
in the client constructor, which also performs the initial connect/init.
Reconnects and reinits use a Fibonacci backoff strategy; the number of
attempts and the maximum waiting interval can be adjusted via the
'reconnects' and 'max_reconnect_interval' config options.
Messages that fail processing are now dropped instead of being requeued,
preventing infinite processing loops. However, this means that the
messages are lost. Handling failed messages will need to be addressed
separately.
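For reference, a sketch of the drop-instead-of-requeue behavior,
assuming the rabbitmq/amqp091-go client (the 'process' callback and the
function name are illustrative):

```go
package main

import amqp "github.com/rabbitmq/amqp091-go"

// handleDelivery acks successful messages and nacks failed ones with
// requeue=false, so a poison message cannot loop forever. The message
// is discarded unless the queue has a dead-letter exchange configured.
func handleDelivery(d amqp.Delivery, process func([]byte) error) error {
	if err := process(d.Body); err != nil {
		return d.Nack(false, false) // multiple=false, requeue=false
	}
	return d.Ack(false)
}
```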
'concurrent_requests' now sets the prefetch count. Setting the prefetch
count via the Qos function replaced our old channel-based approach. The
default value is 1024 which, according to the rabbitmq docs, 'runs into
the law of diminishing returns'; the recommended value is between 100
and 300. Source:
https://www.rabbitmq.com/confirms.html#channel-qos-prefetch-throughput
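A sketch of how the prefetch count can be applied, again assuming the
rabbitmq/amqp091-go client ('setupChannel' and 'concurrentRequests' are
illustrative names):

```go
package main

import amqp "github.com/rabbitmq/amqp091-go"

// setupChannel caps unacknowledged deliveries at concurrentRequests,
// replacing the old Go-channel based throttling.
func setupChannel(conn *amqp.Connection, concurrentRequests int) (*amqp.Channel, error) {
	ch, err := conn.Channel()
	if err != nil {
		return nil, err
	}
	// prefetchSize (in bytes) is not implemented by RabbitMQ, so it
	// stays 0; global=false scopes the limit to each new consumer.
	if err := ch.Qos(concurrentRequests, 0, false); err != nil {
		ch.Close()
		return nil, err
	}
	return ch, nil
}
```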
Fix test compilation errors and failing tests caused by these changes.
References #4160
The reconnect goroutines are separate for each configured reader.
Additional changes:
- rearrange config_defaults fields for ers/ees;
- add comment for RunDelay config option inside struct definition;
- improve comments for amqp opts in config_defaults.
http.Header is just a map underneath, so it is not safe for concurrent
use. Before this change, a panic could occur during asynchronous HTTP
exports (this applied to both *http_post and *http_json_map exporters).
Cloning the header before attaching it to the HTTP request fixed the
issue.
Slightly improved the test that found this data race.
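A minimal sketch of the fix, assuming a shared header that multiple
goroutines build requests from (names are illustrative):

```go
package main

import (
	"io"
	"net/http"
)

// newExportRequest clones the shared header before attaching it, so
// concurrent exports never mutate the same underlying map.
func newExportRequest(method, url string, shared http.Header, body io.Reader) (*http.Request, error) {
	req, err := http.NewRequest(method, url, body)
	if err != nil {
		return nil, err
	}
	req.Header = shared.Clone() // each request gets its own copy
	return req, nil
}
```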
libengine_it_test.go:
- handle postgres dbtype as well
- after rpcclient update, we receive context deadline exceeded
instead of REPLY_TIMEOUT, so the expected value had to be
updated.
tut_smgeneric_it_test.go:
- an additional attribute profile has been added to the oldtutorial
  tariffplans, which changed the number of profiles and indexes
  existing in the storage during testing. Expected values have been
  updated.
- Set serverAPI options, but leave them commented out (currently the
  distinct api and the create.size BSON field are deprecated, plus
  possibly others that are untested)
- Remove the custom time decoder used for mongo BSON datetime values.
  The custom decoder only converted these values to UTC and was no
  different from the default time.Time decoder in the MongoDB driver,
  which also handles BSON string, int64, and document values.
- Implement 'buildURL' function to connect to mongo (can also be
used for mysql and postgres)
- Update function names, variable names, and comments for clarity
- Replace 'bsonx.Regex' with the Regex primitive for v1.12 compatibility
- Use simple concatenation instead of Sprintf
- Declare 'decimalType' locally, replace global 'decimalType'
- Simplify several functions without altering functionality
- Converting directly from a D to an M is deprecated, so we now decode
  directly into an M (see the sketch after this list).
- Used errors.As and errors.Is for proper error comparison and assertion
- Revised sloppy reassignments and added missing error checks
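A sketch of the resulting query style (the collection and field names
are made up; only the primitive.Regex filter and the direct bson.M
decoding reflect the changes above):

```go
package main

import (
	"context"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/bson/primitive"
	"go.mongodb.org/mongo-driver/mongo"
)

// findByPrefix filters with the Regex primitive (bsonx.Regex is gone in
// driver v1.12) and decodes each document directly into a bson.M rather
// than into a bson.D that is converted afterwards.
func findByPrefix(ctx context.Context, col *mongo.Collection, prefix string) ([]bson.M, error) {
	filter := bson.M{"id": primitive.Regex{Pattern: "^" + prefix}}
	cur, err := col.Find(ctx, filter)
	if err != nil {
		return nil, err
	}
	defer cur.Close(ctx)
	var out []bson.M
	for cur.Next(ctx) {
		doc := bson.M{}
		if err := cur.Decode(&doc); err != nil {
			return nil, err
		}
		out = append(out, doc)
	}
	return out, cur.Err()
}
```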
json.InvalidUnmarshalError occurs only when a nil pointer or a
non-pointer is passed to Unmarshal, which is never the case for us.
Using go vet helps make sure it also won't happen in the future.
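A minimal illustration of when encoding/json returns this error and how
errors.As detects it:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

func main() {
	var v map[string]any
	// Passing a non-pointer (v instead of &v) yields an
	// *json.InvalidUnmarshalError; go vet flags this mistake.
	err := json.Unmarshal([]byte(`{"a":1}`), v)
	var invErr *json.InvalidUnmarshalError
	fmt.Println(errors.As(err, &invErr)) // true
}
```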
- use kraft instead of zookeeper
- add handlers to react to config changes
- create a separate user for the kafka service
- bump the kafka version
- make the role more configurable