port doc changes from v0.11

This commit is contained in:
ionutboangiu
2025-05-08 21:39:08 +03:00
committed by Dan Christian Bogos
parent 3acb9deac5
commit 6a2654e6d8
38 changed files with 2934 additions and 325 deletions

View File

@@ -1,5 +1,13 @@
version: 2
build:
os: ubuntu-22.04
tools:
python: "3.11"
sphinx:
configuration: docs/conf.py
python:
install:
- requirements: docs/requirements.txt

View File

@@ -19,4 +19,6 @@ Following *Agents* are implemented within CGRateS:
astagent
fsagent
kamagent
ers
janusagent
prometheus

View File

@@ -3,5 +3,5 @@ API Calls
API calls are documented in the following GoDoc_
.. _GoDoc : https://godoc.org/github.com/cgrates/cgrates/apier
.. _GoDoc : https://pkg.go.dev/github.com/cgrates/cgrates/apier@master

View File

@@ -13,4 +13,4 @@ The CGRateS framework consists of functionality packed within **five** software
cgr-loader
cgr-migrator
cgr-tester

View File

@@ -3,7 +3,7 @@
AttributeS
==========
**AttributeS** is a standalone subsystem within **CGRateS** and it is the equivalent of a key-value store. It is accessed via `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_.
**AttributeS** is a standalone subsystem within **CGRateS** and it is the equivalent of a key-value store. It is accessed via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
As most of the other subsystems, it is performance oriented, stored inside *DataDB* but cached inside the *cgr-engine* process.
Caching can be done dynamically/on-demand or at start-time/precached and it is configurable within *cache* section in the :ref:`JSON configuration <configuration>`.
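As an illustration, precaching the attribute profiles at start-time could be configured as in the sketch below (a minimal fragment; the partition name and option set should be verified against the *caches* section of your configuration reference):

.. code-block:: json

    "caches": {
        "partitions": {
            "*attribute_profiles": {
                "limit": -1, "ttl": "", "static_ttl": false, "precache": true
            }
        }
    }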
@@ -58,9 +58,15 @@ Tenant
ID
Identifier for the *AttributeProfile*, unique within a *Tenant*
Context
A list of *contexts* applying to this profile. A *context* is usually associated with a logical phase during event processing (ie: *\*sessions* or *\*cdrs* for events parsed by :ref:`SessionS` or :ref:`CDRs`)
FilterIDs
List of *FilterProfiles* which should match in order to consider the *AttributeProfile* matching the event.
ActivationInterval
The time interval when this profile becomes active. If undefined, the profile is always active. Other options are start time, end time or both.
Blocker
In case multiple *process runs* are allowed, this flag will stop further processing.
@@ -94,13 +100,13 @@ Type
**\*composed**
Same as *\*variable* but instead of overwriting *Path*, it will append to it.
**\*usageDifference**
**\*usage_difference**
Will calculate the duration difference between two field names defined in the *Value*. If the number of fields in the *Value* is different from 2, it will return an error.
**\*sum**
Will sum up the values in the *Value*.
**\*valueExponent**
**\*value_exponent**
Will compute the exponent of the first field in the *Value*.
Value

View File

@@ -4,7 +4,7 @@ CDRs
====
**CDRs** is a standalone subsystem within **CGRateS** responsible to process *CDR* events. It is accessed via `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_ or separate *HTTP handlers* configured within *http* section inside :ref:`JSON configuration <configuration>`.
**CDRs** is a standalone subsystem within **CGRateS** responsible for processing *CDR* events. It is accessed via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_ or separate *HTTP handlers* configured within the *http* section inside :ref:`JSON configuration <configuration>`.
Due to the multiple interfaces exposed, **CDRs** is designed to function as a centralized server for *CDRs* received from various sources. Examples of such sources are:
*\*real-time events* from interfaces like *Diameter*, *Radius*, *Asterisk*, *FreeSWITCH*, *Kamailio*, *OpenSIPS*
@@ -48,9 +48,30 @@ stats_conns
Connections towards :ref:`StatS` component to compute stat metrics for CDR events. Empty to disable the functionality.
online_cdr_exports
List of :ref:`CDRe` profiles which will be processed for each CDR event. Empty to disable online CDR exports.
List of :ref:`EEs` profiles which will be processed for each CDR event. Empty to disable online CDR exports.
Export types
------------
There are two types of exports with common configuration but different data sources:
Online exports
^^^^^^^^^^^^^^
These are real-time exports, triggered by the CDR events processed by :ref:`CDRs`, which also serve as their data source.
The *online exports* are enabled via *online_cdr_exports* :ref:`JSON configuration <configuration>` option within *cdrs*.
You can control the templates which are to be executed via the filters which are applied for each export template individually.
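A minimal sketch of enabling online exports could look as below (the exporter *id* and filter are illustrative assumptions; the exact option names should be checked against the *ees* configuration reference):

.. code-block:: json

    "cdrs": {
        "enabled": true,
        "online_cdr_exports": ["csv_exporter"]
    },
    "ees": {
        "enabled": true,
        "exporters": [
            {
                "id": "csv_exporter",
                "type": "*file_csv",
                "export_path": "/var/spool/cgrates/ees",
                "filters": ["*string:~*req.Category:call"]
            }
        ]
    }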
Offline exports
^^^^^^^^^^^^^^^
These exports are triggered via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_ and use the CDRs stored within *StorDB* as their data source.
APIs logic
----------
@@ -79,7 +100,7 @@ Receives the CDR in the form of *CGRateS Event* together with processing flags a
Will store the *CDR* to *StorDB*. Defaults to the *store_cdrs* parameter within :ref:`JSON configuration <configuration>`. If the store process fails for one of the CDRs, an automated refund is performed for all derived CDRs.
\*export
Will export the event matching export profiles. These profiles are defined within *cdre* section inside :ref:`JSON configuration <configuration>`. Defaults to *true* if there is at least one *online_cdr_exports* profile configured within :ref:`JSON configuration <configuration>`.
Will export the event matching export profiles. These profiles are defined within *ees* section inside :ref:`JSON configuration <configuration>`. Defaults to *true* if there is at least one *online_cdr_exports* profile configured within :ref:`JSON configuration <configuration>`.
\*thresholds
Will process the event with the :ref:`ThresholdS`, allowing us to execute actions based on filters set for matching profiles. Defaults to *true* if there are connections towards :ref:`ThresholdS` within :ref:`JSON configuration <configuration>`.
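As a hedged illustration, processing a CDR with explicit flags over the JSON-RPC interface might look as below (the method name and argument layout should be verified against the API reference for your version; all field values are examples):

.. code-block:: json

    {
        "method": "CDRsV1.ProcessEvent",
        "params": [{
            "Flags": ["*store", "*export", "*thresholds"],
            "Tenant": "cgrates.org",
            "ID": "SampleEvent",
            "Event": {
                "OriginID": "abc123",
                "RequestType": "*rated",
                "Account": "1001",
                "Destination": "1002",
                "Usage": "1m"
            }
        }],
        "id": 1
    }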

View File

@@ -13,10 +13,18 @@ Configurable via command line arguments.
Usage of cgr-console:
-ca_path string
path to CA for tls connection(only for self sign certificate)
-connect_attempts int
Connect attempts (default 3)
-connect_timeout int
Connect timeout in seconds (default 1)
-crt_path string
path to certificate for tls connection
-key_path string
path to key for tls connection
-max_reconnect_interval int
Maximum reconnect interval
-reconnects int
Reconnect attempts (default 3)
-reply_timeout int
Reply timeout in seconds (default 300)
-rpc_encoding string
@@ -31,4 +39,5 @@ Configurable via command line arguments.
Prints the application version.
.. hint:: # cgr-console status

View File

@@ -9,36 +9,43 @@ Customisable through the use of *json* :ref:`JSON configuration <configuration>`
Able to read the configuration from either a local directory of *.json* files with an unlimited number of subfolders (ordered alphabetically) or a list of http paths (separated by ";").
::
$ cgr-engine -help
Usage of cgr-engine:
-config_path string
Configuration directory path. (default "/etc/cgrates/")
Configuration directory path (default "/etc/cgrates/")
-cpuprof_dir string
write cpu profile to files
-httprof_path string
http address used for program profiling
Directory for CPU profiles
-log_level int
Log level (0-emergency to 7-debug) (default -1)
Log level (0=emergency to 7=debug) (default -1)
-logger string
logger <*syslog|*stdout>
Logger type <*syslog|*stdout>
-memprof_dir string
write memory profile to file
Directory for memory profiles
-memprof_interval duration
Time between memory profile saves (default 5s)
-memprof_nrfiles int
Number of memory profile to write (default 1)
Interval between memory profile saves (default 15s)
-memprof_maxfiles int
Number of memory profiles to keep (most recent) (default 1)
-memprof_timestamp
Add timestamp to memory profile files
-node_id string
The node ID of the engine
Node ID of the engine
-pid string
Write pid file
-scheduled_shutdown string
shutdown the engine after this duration
Path to write the PID file
-preload string
Loader IDs used to load data before engine starts
-print_config
Print configuration object in JSON format
-scheduled_shutdown duration
Shutdown the engine after the specified duration
-set_versions
Overwrite database versions (equivalent to cgr-migrator -exec=*set_versions)
-singlecpu
Run on single CPU core
Run on a single CPU core
-version
Prints the application version.
Print application version and exit
.. hint:: $ cgr-engine -config_path=/etc/cgrates
@@ -61,12 +68,13 @@ The components from the diagram can be found documented in the links below:
sessions
rals
cdrs
cdre
ees
attributes
chargers
resources
routes
stats
trends
thresholds
filters
dispatchers
@@ -74,6 +82,8 @@ The components from the diagram can be found documented in the links bellow:
apiers
loaders
caches
guardian
datadb
stordb
rpcconns

View File

@@ -23,6 +23,8 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
CacheS component to contact for cache reloads, empty to disable automatic cache reloads (default "*localhost")
-caching string
Caching strategy used when loading TP
-caching_delay duration
Adds delay before cache reload
-config_path string
Configuration directory path.
-datadb_host string
@@ -34,7 +36,7 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
-datadb_port string
The DataDb port to bind to. (default "6379")
-datadb_type string
The type of the DataDB database <*redis|*mongo> (default "redis")
The type of the DataDB database <*redis|*mongo> (default "*redis")
-datadb_user string
The DataDb user to sign in as. (default "cgrates")
-dbdata_encoding string
@@ -51,20 +53,38 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
Load the tariff plan from storDb to dataDb
-import_id string
Uniquely identify an import/load, postpended to some automatic fields
-mongoConnScheme string
Scheme for MongoDB connection <mongodb|mongodb+srv> (default "mongodb")
-mongoQueryTimeout duration
The timeout for queries (default 10s)
-path string
The path to folder containing the data files (default "./")
-recursive
Loads data from the folder recursively.
-redisCACertificate string
Path to the CA certificate
-redisClientCertificate string
Path to the client certificate
-redisClientKey string
Path to the client key
-redisCluster
Is the redis datadb a cluster
-redisClusterOndownDelay duration
The delay before executing the commands if the redis cluster is in the CLUSTERDOWN state
-redisClusterSync duration
The sync interval for the redis cluster (default 5s)
-redisConnectAttempts int
The maximum amount of dial attempts (default 20)
-redisConnectTimeout duration
The amount of wait time until timeout for a connection attempt
-redisMaxConns int
The connection pool size (default 10)
-redisReadTimeout duration
The amount of wait time until timeout for reading operations
-redisSentinel string
The name of redis sentinel
-redisCluster bool
Is the redis datadb a cluster
-cluster_sync string
The sync interval for the redis cluster
-cluster_ondown_delay string
The delay before executing the commands if the redis cluster is in the CLUSTERDOWN state
-mongoQueryTimeout string
The timeout for queries
-redisTLS
Enable TLS when connecting to Redis
-redisWriteTimeout duration
The amount of wait time until timeout for writing operations
-remove
Will remove instead of adding data from DB
-route_id string
@@ -78,15 +98,17 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
-stordb_name string
The name/number of the storDb to connect to. (default "cgrates")
-stordb_passwd string
The storDb user's password.
The storDb user's password. (default "CGRateS.org")
-stordb_port string
The storDb port to bind to. (default "3306")
-stordb_type string
The type of the storDb database <*mysql|*postgres|*mongo> (default "mysql")
The type of the storDb database <*mysql|*postgres|*mongo> (default "*mysql")
-stordb_user string
The storDb user to sign in as. (default "cgrates")
-tenant string
(default "cgrates.org")
-timezone string
Timezone for timestamps where not specified <""|UTC|Local|$IANA_TZ_DB>
Timezone for timestamps where not specified <""|UTC|Local|$IANA_TZ_DB> (default "Local")
-to_stordb
Import the tariff plan from files to storDb
-tpid string
@@ -94,4 +116,4 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
-verbose
Enable detailed verbose logging output
-version
Prints the application version.

View File

@@ -22,7 +22,7 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
-datadb_port string
the DataDB port (default "6379")
-datadb_type string
the type of the DataDB Database <*redis|*mongo> (default "redis")
the type of the DataDB Database <*redis|*mongo> (default "*redis")
-datadb_user string
the DataDB user (default "cgrates")
-dbdata_encoding string
@@ -31,11 +31,17 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
parse loaded data for consistency and errors, without storing it
-exec string
fire up automatic migration <*set_versions|*cost_details|*accounts|*actions|*action_triggers|*action_plans|*shared_groups|*filters|*stordb|*datadb>
-mongoConnScheme string
Scheme for MongoDB connection <mongodb|mongodb+srv> (default "mongodb")
-mongoQueryTimeout duration
The timeout for queries (default 10s)
-out_datadb_encoding string
the encoding used to store object Data in strings in move mode (default "*datadb")
-out_datadb_host string
output DataDB host to connect to (default "*datadb")
-out_datadb_name string
output DataDB name/number (default "*datadb")
-out_datadb_passwd string
-out_datadb_password string
output DataDB password (default "*datadb")
-out_datadb_port string
output DataDB port (default "*datadb")
@@ -43,15 +49,13 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
output DataDB type <*redis|*mongo> (default "*datadb")
-out_datadb_user string
output DataDB user (default "*datadb")
-out_dbdata_encoding string
the encoding used to store object Data in strings in move mode (default "*datadb")
-out_redis_sentinel string
the name of redis sentinel (default "*datadb")
-out_stordb_host string
output StorDB host (default "*stordb")
-out_stordb_name string
output StorDB name/number (default "*stordb")
-out_stordb_passwd string
-out_stordb_password string
output StorDB password (default "*stordb")
-out_stordb_port string
output StorDB port (default "*stordb")
@@ -59,26 +63,42 @@ Customisable through the use of :ref:`JSON configuration <configuration>` or com
output StorDB type for move mode <*mysql|*postgres|*mongo> (default "*stordb")
-out_stordb_user string
output StorDB user (default "*stordb")
-redisCACertificate string
Path to the CA certificate
-redisClientCertificate string
Path to the client certificate
-redisClientKey string
Path to the client key
-redisCluster
Is the redis datadb a cluster
-redisClusterOndownDelay duration
The delay before executing the commands if the redis cluster is in the CLUSTERDOWN state
-redisClusterSync duration
The sync interval for the redis cluster (default 5s)
-redisConnectAttempts int
The maximum amount of dial attempts (default 20)
-redisConnectTimeout duration
The amount of wait time until timeout for a connection attempt
-redisMaxConns int
The connection pool size (default 10)
-redisReadTimeout duration
The amount of wait time until timeout for reading operations
-redisSentinel string
the name of redis sentinel
-redisCluster bool
Is the redis datadb a cluster
-cluster_sync string
The sync interval for the redis cluster
-cluster_ondown_delay string
The delay before executing the commands if the redis cluster is in the CLUSTERDOWN state
-mongoQueryTimeout string
The timeout for queries
-redisTLS
Enable TLS when connecting to Redis
-redisWriteTimeout duration
The amount of wait time until timeout for writing operations
-stordb_host string
the StorDB host (default "127.0.0.1")
-stordb_name string
the name/number of the StorDB (default "cgrates")
-stordb_passwd string
the StorDB password
the StorDB password (default "CGRateS.org")
-stordb_port string
the StorDB port (default "3306")
-stordb_type string
the type of the StorDB Database <*mysql|*postgres|*mongo> (default "mysql")
the type of the StorDB Database <*mysql|*postgres|*mongo> (default "*mysql")
-stordb_user string
the StorDB user (default "cgrates")
-verbose

View File

@@ -9,10 +9,14 @@ Command line stress testing tool configurable via command line arguments.
$ cgr-tester -h
Usage of cgr-tester:
-calls int
run n number of calls (default 100)
-category string
The Record category to test. (default "call")
-config_path string
Configuration directory path.
-cps int
run n requests in parallel (default 100)
-cpuprofile string
write cpu profile to file
-datadb_host string
@@ -24,44 +28,70 @@ Command line stress testing tool configurable via command line arguments.
-datadb_port string
The DataDb port to bind to. (default "6379")
-datadb_type string
The type of the DataDb database <redis> (default "redis")
The type of the DataDb database <redis> (default "*redis")
-datadb_user string
The DataDb user to sign in as. (default "cgrates")
-dbdata_encoding string
The encoding used to store object data in strings. (default "msgpack")
-destination string
The destination to use in queries. (default "1002")
-digits int
Number of digits Account and Destination will have (default 10)
-exec string
Pick what you want to test <*sessions|*cost>
-file_path string
read requests from file with path
-json
Use JSON RPC
-max_usage duration
Maximum usage a session can have (default 5s)
-memprofile string
write memory profile to this file
-parallel int
run n requests in parallel
-min_usage duration
Minimum usage a session can have (default 1s)
-mongoConnScheme string
Scheme for MongoDB connection <mongodb|mongodb+srv> (default "mongodb")
-mongoQueryTimeout duration
The timeout for queries (default 10s)
-rater_address string
Rater address for remote tests. Empty for internal rater.
-redisCluster
Is the redis datadb a cluster
-redisClusterOndownDelay duration
The delay before executing the commands if the redis cluster is in the CLUSTERDOWN state
-redisClusterSync duration
The sync interval for the redis cluster (default 5s)
-redisConnectAttempts int
The maximum amount of dial attempts (default 20)
-redisConnectTimeout duration
The amount of wait time until timeout for a connection attempt
-redisMaxConns int
The connection pool size (default 10)
-redisReadTimeout duration
The amount of wait time until timeout for reading operations
-redisSentinel string
The name of redis sentinel
-redisCluster bool
Is the redis datadb a cluster
-cluster_sync string
The sync interval for the redis cluster
-cluster_ondown_delay string
The delay before executing the commands if the redis cluster is in the CLUSTERDOWN state
-mongoQueryTimeout string
The timeout for queries
-redisWriteTimeout duration
The amount of wait time until timeout for writing operations
-req_separator string
separator for requests in file (default "\n\n")
-request_type string
Request type of the call (default "*rated")
-runs int
stress cycle number (default 100000)
-subject string
The rating subject to use in queries. (default "1001")
-tenant string
The tenant to use in queries. (default "cgrates.org")
-timeout duration
After last call, time out after this much duration (default 10s)
-tor string
The type of record to use in queries. (default "*voice")
-update_interval duration
Time duration added for each session update (default 1s)
-usage string
The duration to use in call simulation. (default "1m")
-verbose
Enable detailed verbose logging output
-version
Prints the application version.

View File

@@ -5,7 +5,7 @@ ChargerS
**ChargerS** is a **CGRateS** subsystem designed to produce billing runs via *DerivedCharging* mechanism.
It works as standalone component of **CGRateS**, accessible via `CGRateS RPC <https://godoc.org/github.com/cgrates/cgrates/apier/>`_ via a rich set of *APIs*. As input **ChargerS** is capable of receiving generic events (hashmaps) with dynamic types for fields.
It works as a standalone component of **CGRateS**, accessible via `CGRateS RPC <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_ through a rich set of *APIs*. As input, **ChargerS** is capable of receiving generic events (hashmaps) with dynamic types for fields.
**ChargerS** is an **important** part of the charging process within **CGRateS** since with no *ChargingProfile* matching, there will be no billing run performed.
@@ -19,7 +19,7 @@ Is a process of receiving an event as input and *deriving* that into multiples (
Processing logic
----------------
For the received *Event* we will retrieve the list of matching *ChargingProfiles' via :ref:`FilterS`. These profiles will be then ordered based on their *Weight* - higher *Weight* will have more priority. If no profile will match due to *Filter*, *NOT_FOUND* will be returned back to the RPC client.
For the received *Event* we will retrieve the list of matching *ChargingProfiles* via :ref:`FilterS`. These profiles will then be ordered based on their *Weight* - higher *Weight* has more priority. If no profile matches due to *Filter* or *ActivationInterval*, *NOT_FOUND* is returned to the RPC client.
Each *ChargingProfile* matching the *Event* will produce a standalone event based on configured *RunID*. These events will each have a special field added (or overwritten), the *RunID*, which is taken from the applied *ChargingProfile*.
@@ -43,6 +43,9 @@ ID
FilterIDs
List of *FilterProfiles* which should match in order to consider the ChargerProfile matching the event.
ActivationInterval
The time interval when this profile becomes active. If undefined, the profile is always active. Other options are start time, end time or both.
RunID
The identifier for a single bill run / charged output *Event*.
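As an illustration, a default *ChargerProfile* could look as sketched below (all field values are examples; *AttributeIDs* set to *\*none* would disable attribute processing for the run, following the conventions above):

.. code-block:: json

    {
        "Tenant": "cgrates.org",
        "ID": "DEFAULT",
        "FilterIDs": [],
        "ActivationInterval": null,
        "RunID": "*default",
        "AttributeIDs": ["*none"],
        "Weight": 0
    }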

View File

@@ -25,7 +25,12 @@ import sys, os
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.todo']
extensions = [
'sphinx.ext.todo',
'sphinx_copybutton',
'sphinx_tabs.tabs',
'sphinx_rtd_theme'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -41,16 +46,16 @@ master_doc = 'index'
# General information about the project.
project = u'CGRateS'
copyright = u'2012-2020, ITsysCOM GmbH'
copyright = u'2012-2023, ITsysCOM GmbH'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0'
version = '0.11'
# The full version, including alpha/beta/rc tags.
release = '1.0~dev'
release = '0.11.0~dev'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -91,7 +96,7 @@ pygments_style = 'sphinx'
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
@@ -217,3 +222,4 @@ man_pages = [
# extensions = ['autoapi.extension']
# autoapi_type = 'go'
# autoapi_dirs = ['../apier/v1']

View File

@@ -3,5 +3,289 @@
DataDB
======
**DataDB** is the subsystem within **CGRateS** responsible for storing runtime data like accounts, rating plans, and other objects that the engine needs for its operation. It supports various database backends to fit different deployment needs.
TBD
Database Types
--------------
DataDB supports the following database types through the ``db_type`` parameter:
* ``*redis``: Uses Redis as the storage backend
* ``*mongo``: Uses MongoDB as the storage backend
* ``*internal``: Uses in-memory storage within the CGRateS process
When using ``*internal`` as the ``db_type``, **CGRateS** leverages your machine's memory to store all **DataDB** records directly inside the engine. This drastically increases read/write performance, as no data leaves the process, avoiding the overhead associated with external databases. The configuration supports periodic data dumps to disk to enable persistence across reboots.
The internal database is ideal for:
* Environments with extremely high performance requirements
* Systems where external database dependencies should be avoided
* Lightweight deployments or containers requiring self-contained runtime
* Temporary setups or testing environments
Remote and Replication Functionality
------------------------------------
Remote Functionality
~~~~~~~~~~~~~~~~~~~~
DataDB supports fetching data from remote CGRateS instances when items are not found locally. This allows for distributed setups where data can be stored across multiple instances.
To use remote functionality:
1. Define RPC connections to remote engines
2. Configure which data items should be fetched remotely by setting ``remote: true`` for specific items
3. Optionally set an identifier for the local connection with ``remote_conn_id``
Replication
~~~~~~~~~~~
DataDB supports replicating data changes to other CGRateS instances. When enabled, modifications (Set/Remove operations) are propagated to configured remote engines, ensuring data consistency across a distributed deployment.
Unlike the remote functionality which affects Get operations, replication applies to Set and Remove operations, pushing changes outward to other nodes.
Configuration
-------------
A complete DataDB configuration includes database connection details, remote functionality settings, replication options, and item-specific settings. For reference, the full default configuration can be found in the :ref:`configuration` section.
.. code-block:: json
"data_db": {
"db_type": "*redis",
"db_host": "127.0.0.1",
"db_port": 6379,
"db_name": "10",
"db_user": "cgrates",
"db_password": "",
"remote_conns": ["engine2", "engine3"],
"remote_conn_id": "engine1",
"replication_conns": ["engine2", "engine3"],
"replication_filtered": false,
"replication_cache": "",
"replication_failed_dir": "/var/lib/cgrates/failed_replications",
"replication_interval": "1s",
"items": {
"*accounts": {"limit": -1, "ttl": "", "static_ttl": false, "remote": false, "replicate": true},
"*rating_plans": {"limit": -1, "ttl": "", "static_ttl": false, "remote": true, "replicate": true}
// Other items...
},
"opts": {
// Database-specific options...
}
}
Parameters
----------
Basic Connection
~~~~~~~~~~~~~~~~
db_type
The database backend to use. Values: <*redis|*mongo|*internal>
db_host
Database host address (e.g., "127.0.0.1")
db_port
Port to reach the database (e.g., 6379 for Redis)
db_name
Database name to connect to (e.g., "10" for Redis database number)
db_user
Username for database authentication
db_password
Password for database authentication
Remote Functionality
~~~~~~~~~~~~~~~~~~~~
remote_conns
Array of connection IDs (defined in rpc_conns) that will be queried when items are not found locally
remote_conn_id
Identifier sent to remote connections to identify this engine
Replication Parameters
~~~~~~~~~~~~~~~~~~~~~~
replication_conns
Array of connection IDs (defined in rpc_conns) to which data will be replicated
replication_filtered
When enabled, replication occurs only to connections that previously received a Get request for the item. Values: <true|false>
replication_cache
Caching action to execute on replication targets when items are replicated
replication_failed_dir
Directory to store failed batch replications when using intervals. This directory must exist before launching CGRateS.
replication_interval
Interval between batched replications:
- Empty/0: Immediate replication after each operation
- Duration (e.g., "1s"): Batches replications and sends them at the specified interval
Items Configuration
~~~~~~~~~~~~~~~~~~~
DataDB manages multiple data types through the ``items`` map, with these configuration options for each item:
limit
Maximum number of items of this type to store. -1 means no limit. Only applies to *internal database.
ttl
Time-to-live for items before automatic removal. Empty string means no expiration. Only applies to *internal database.
static_ttl
Controls TTL behavior. When true, TTL is fixed from initial creation. When false, TTL resets on each update. Only applies to *internal database.
remote
When true, enables fetching this item type from remote connections if not found locally.
replicate
When true, enables replication of this item type to configured remote connections.
Internal Database Options
~~~~~~~~~~~~~~~~~~~~~~~~~
When using ``*internal`` as the database type, additional options are available in the ``opts`` section:
internalDBDumpPath
Defines the path to the folder where the memory-stored **DataDB** will be dumped. This path is also used for recovery during engine startup. Ensure the folder exists before launching the engine.
internalDBBackupPath
Path where backup copies of the dump folder will be stored. Backups are triggered via the `APIerSv1.BackupDataDBDump <https://pkg.go.dev/github.com/cgrates/cgrates@master/engine#InternalDB.BackupDataDB>`_ API call. This API can also specify a custom path for backups, otherwise the default ``internalDBBackupPath`` is used. Backups serve as a fallback in case of dump file corruption or loss. The created folders are timestamped in UNIX time for easy identification of the latest backup. To recover using a backup, simply transfer the folders from a backup in ``internalDBBackupPath`` to ``internalDBDumpPath`` and start the engine. If backups are zipped, they need to be unzipped manually when restoring.
internalDBStartTimeout
Specifies the maximum amount of time the engine will wait to recover the in-memory **DataDB** state from the dump files during startup. If this duration is exceeded, the engine will timeout and an error will be returned.
internalDBDumpInterval
Specifies the time interval at which **DataDB** will be dumped to disk. This duration should be chosen based on the machine's capacity and data load. If the interval is too long and a lot of data changes during that period, the dumping process will take longer and, in the event of an engine crash, any data not yet dumped will be lost. Conversely, if the interval is too short and **DataDB** is queried frequently, the dump process will compete with the queries for processing power. Since machine resources and data loads vary, it is recommended to simulate the load on your system and determine the optimal "sweet spot" for this interval. At engine shutdown, any remaining undumped data will automatically be written to disk, regardless of the interval setting.
- Setting the interval to ``0s`` disables the periodic dumping, meaning any data in **DataDB** will be lost when the engine shuts down.
- Setting the interval to ``-1`` enables immediate dumping—whenever a record in **DataDB** is added, changed, or removed, it will be dumped to disk immediately.
Manual dumping can be triggered using the `APIerSv1.DumpDataDB <https://pkg.go.dev/github.com/cgrates/cgrates@master/engine#InternalDB.DumpDataDB>`_ API.
internalDBRewriteInterval
Defines the interval for rewriting files that are not currently being used for dumping data, converting them into an optimized, streamlined version and improving recovery time. Similar to ``internalDBDumpInterval``, the rewriting will trigger based on specified intervals:
- Setting the interval ``0s`` disables rewriting.
- Setting the interval ``-1`` triggers rewriting only once when the engine starts.
- Setting the interval ``-2`` triggers rewriting only once when the engine shuts down.
Rewriting should be used sparingly, as the process temporarily loads the entire ``internalDBDumpPath`` folder into memory for optimization, and then writes it back to the dump folder once done. This results in a surge of memory usage, which could amount to the size of the dump file itself during the rewrite. As a rule of thumb, expect the engine's memory usage to approximately double while the rewrite process is running. Manual rewriting can be triggered at any time via the `APIerSv1.RewriteDataDB <https://pkg.go.dev/github.com/cgrates/cgrates@master/engine#InternalDB.RewriteDataDB>`_ API.
internalDBFileSizeLimit
Specifies the maximum size a single dump file can reach. Upon reaching the limit, a new dump file is created. Limiting file size improves recovery time and allows files that have reached the limit to be rewritten.
Redis-Specific Options
~~~~~~~~~~~~~~~~~~~~~~
The following options in the ``opts`` section apply when using Redis:
redisMaxConns
Connection pool size
redisConnectAttempts
Maximum number of connection attempts
redisSentinel
Sentinel name when using Redis Sentinel
redisCluster
Enables Redis Cluster mode
redisClusterSync
Sync interval for Redis Cluster
redisClusterOndownDelay
Delay before executing commands when Redis Cluster is in CLUSTERDOWN state
redisConnectTimeout, redisReadTimeout, redisWriteTimeout
Timeout settings for various Redis operations
redisTLS, redisClientCertificate, redisClientKey, redisCACertificate
TLS configuration for secure Redis connections
MongoDB-Specific Options
~~~~~~~~~~~~~~~~~~~~~~~~
The following options in the ``opts`` section apply when using MongoDB:
mongoQueryTimeout
Timeout for MongoDB queries
mongoConnScheme
Connection scheme for MongoDB (<mongodb|mongodb+srv>)
Configuration Examples
----------------------
Persistent Internal Database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: json

    "data_db": {
        "db_type": "*internal",
        "opts": {
            "internalDBDumpPath": "/var/lib/cgrates/internal_db/datadb",
            "internalDBBackupPath": "/var/lib/cgrates/internal_db/backup/datadb",
            "internalDBStartTimeout": "5m",
            "internalDBDumpInterval": "1m",
            "internalDBRewriteInterval": "15m",
            "internalDBFileSizeLimit": "1GB"
        }
    }
Replication Setup
~~~~~~~~~~~~~~~~~
First, define connections to the engines you want to replicate to:
.. code-block:: json

    "rpc_conns": {
        "rpl_engine": {
            "conns": [
                {
                    "address": "127.0.0.1:2012",
                    "transport": "*json",
                    "connect_attempts": 5,
                    "reconnects": -1,
                    "max_reconnect_interval": "",
                    "connect_timeout": "1s",
                    "reply_timeout": "2s"
                }
            ]
        }
    }
Then configure DataDB replication (showing only replication-related parameters):
.. code-block:: json

    "data_db": {
        "replication_conns": ["rpl_engine"],
        "replication_failed_dir": "/var/lib/cgrates/failed_replications",
        "replication_interval": "1s",
        "items": {
            "*accounts": {"replicate": true},
            "*reverse_destinations": {"replicate": true},
            "*destinations": {"replicate": true},
            "*rating_plans": {"replicate": true}
            // Other items...
        }
    }
Notes
-----
* By default, both replication and remote functionality are disabled for all items and must be explicitly enabled by setting ``replicate: true`` or ``remote: true`` for each desired item
* When using replication with intervals, make sure to configure a ``replication_failed_dir`` to handle failed replications
* Failed replications can be manually replayed using the `APIerSv1.ReplayFailedReplications <https://pkg.go.dev/github.com/cgrates/cgrates@master/apier/v1#APIerSv1.ReplayFailedReplications>`_ API call
* Remote functionality and replication can be used independently or together, depending on your deployment needs
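The interplay between ``replication_failed_dir`` and the replay API can be illustrated with a minimal sketch. This is illustrative Python with hypothetical function names; the real engine serializes full RPC requests, here reduced to a type/payload pair:

```python
import json
import os
import uuid

def store_failed_replication(failed_dir, item_type, payload):
    """Persist a failed replication request so it can be replayed later
    (mimics the role of replication_failed_dir; not CGRateS code)."""
    os.makedirs(failed_dir, exist_ok=True)
    path = os.path.join(failed_dir, "repl_%s.json" % uuid.uuid4().hex)
    with open(path, "w") as f:
        json.dump({"type": item_type, "payload": payload}, f)
    return path

def replay_failed_replications(failed_dir, send):
    """Replay every stored request through `send`; a file is removed only
    after its request succeeds, so failures remain on disk for a later retry."""
    replayed = 0
    for name in sorted(os.listdir(failed_dir)):
        path = os.path.join(failed_dir, name)
        with open(path) as f:
            req = json.load(f)
        send(req["type"], req["payload"])  # may raise; file is kept on failure
        os.remove(path)
        replayed += 1
    return replayed
```
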
@@ -251,13 +251,13 @@ flags
**\*none**
Disable transferring the request from *Diameter* to *CGRateS* side. Used mostly to passively answer *Diameter* requests or troubleshoot (mostly in combination with the *\*log* flag).
**\*dryRun**
**\*dryrun**
In addition to not transferring the request to the CGRateS side, it will also log the *Diameter* request/reply, useful for troubleshooting.
**\*auth**
Sends the request for authorization on CGRateS.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*rouIgnoreErrors**, **\*routesEventCost**, **\*rouMaxCost** which are used to influence the auth behavior on CGRateS side. More info on that can be found on the **SessionS** component's API behavior.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*routes_ignore_errors**, **\*routes_event_cost**, **\*routes_maxcost** which are used to influence the auth behavior on CGRateS side. More info on that can be found on the **SessionS** component's API behavior.
**\*initiate**
Initiates a session out of request on CGRateS side.
@@ -277,7 +277,7 @@ flags
**\*message**
Process the request as individual message charging on CGRateS side.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*rouIgnoreErrors**, **\*routesEventCost**, **\*rouMaxCost** which are used to influence the behavior on CGRateS side.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*routes_ignore_errors**, **\*routes_event_cost**, **\*routes_maxcost** which are used to influence the behavior on CGRateS side.
**\*event**
@@ -286,7 +286,7 @@ flags
Auxiliary flags available: all flags supported by the "SessionSv1.ProcessEvent" generic API
**\*cdrs**
Build a CDR out of the request on CGRateS side. Can be used simultaneously with other flags (except *\*dry_run)
Build a CDR out of the request on CGRateS side. Can be used simultaneously with other flags (except **\*dryrun**)
path
@@ -328,10 +328,10 @@ type
**\*group**
Writes out the variable value, postpending to the list of variables with the same path
**\*usageDifference**
**\*usage_difference**
Calculates the usage difference between two arguments passed in the *value*. Requires 2 arguments: *$stopTime;$startTime*
**\*ccUsage**
**\*cc_usage**
Calculates the usage out of *CallControl* message. Requires 3 arguments: *$reqNumber;$usedCCTime;$debitInterval*
**\*sum**
@@ -340,7 +340,7 @@ type
**\*difference**
Calculates the difference between all arguments passed within *value*. Possible value types are (in this order): duration, time, float, int.
**\*valueExponent**
**\*value_exponent**
Calculates the exponent of a value. It requires two values: *$val;$exp*
**\*template**
@@ -1,5 +1,207 @@
.. _dispatchers:
DispatcherS
===========
**DispatcherS** is the **CGRateS** component that handles request routing and load balancing. When enabled, it manages all requests to other CGRateS subsystems by wrapping their methods with additional features like :ref:`authorization <dispatcher-authorization>`, request routing, load balancing, broadcasting, and loop prevention.
Processing Logic
----------------
.. _dispatcher-authorization:
Authorization
~~~~~~~~~~~~~
Optional step when :ref:`AttributeS <attributes>` connections are configured:
- Looks for ``*apiKey`` in APIOpts (returns mandatory missing error if not present)
- Sends key to AttributeS, which adds it as "APIKey" to the Event
- AttributeS processes the event and adds allowed methods to APIMethods (e.g., "method1&method2&method3")
- Checks if the requested method is in the allowed list
- Continues only after successful authorization
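The final authorization check reduces to a membership test on the ``&``-separated APIMethods string that AttributeS attaches to the event. A minimal sketch, assuming the field format described above:

```python
def method_allowed(api_methods, method):
    """Check a requested API method against the '&'-separated allowed list
    (e.g. "method1&method2&method3"). Illustrative only; the field names
    follow the description above, not the actual implementation."""
    return method in api_methods.split("&")
```
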
Dispatch
~~~~~~~~
The dispatcher processes requests through these steps:
* Check for bypass conditions:

  * Presence of ``*dispatchers: false`` in APIOpts
  * Request source is another dispatcher and ``prevent_loops`` is enabled

* Check cached routes:

  * Search for ``*routeID`` in APIOpts
  * If found, use cached dispatch data (tenant, profile ID, host ID)
  * Fall back to full dispatch on network errors or timeouts

* Run full dispatch sequence:

  * Get matching dispatcher profiles
  * Try each profile until dispatch succeeds
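The decision flow above can be sketched as follows. This is Python pseudocode with invented data structures; ``attempt`` stands in for actually calling a dispatcher host:

```python
def dispatch(event_opts, cached_routes, hosts, attempt):
    """Simplified model of the dispatch decision flow described above.
    `cached_routes` maps *routeID values to host IDs; `hosts` is the ordered
    list of candidate host IDs from matching profiles (hypothetical shapes)."""
    # 1. Bypass: *dispatchers: false means the request skips dispatching
    if event_opts.get("*dispatchers") is False:
        return "*bypass"
    # 2. Cached route lookup via *routeID
    route_id = event_opts.get("*routeID")
    if route_id is not None and route_id in cached_routes:
        return cached_routes[route_id]
    # 3. Full dispatch: try each candidate host until one succeeds
    for host_id in hosts:
        if attempt(host_id):
            return host_id
    raise LookupError("no dispatcher host could serve the request")
```
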
.. _dispatcher-types:
Dispatcher Types and Strategies
-------------------------------
Load Dispatchers
~~~~~~~~~~~~~~~~
Used for ratio-based request distribution. Hosts are sorted in three steps:

1. Initial sorting by weight
2. Secondary sorting by load ratio (current active requests / configured ratio), where lower ratios have priority
3. Final sorting based on the specified strategy:

   * ``*random``: Randomizes host selection
   * ``*round_robin``: Sequential host selection with weight consideration
   * ``*weight``: Skips final sorting, maintains weight- and load-based ordering

Configuration through:

- ``*defaultRatio`` in StrategyParams
- Direct ratio specification in the Host configuration
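The two-step sort (weight first, then load ratio) can be sketched as follows. The host dicts with ``weight``, ``ratio`` and ``active`` keys are a hypothetical shape for illustration, and the sketch assumes a nonzero ratio:

```python
import random

def sort_load_hosts(hosts, strategy="*weight"):
    """Order hosts following the Load dispatcher rules above: primary sort by
    weight (higher first), secondary by load ratio (active requests divided by
    the configured ratio, lower first). Not the actual implementation."""
    ordered = sorted(hosts, key=lambda h: (-h["weight"], h["active"] / h["ratio"]))
    if strategy == "*random":
        random.shuffle(ordered)  # final strategy step randomizes selection
    return [h["id"] for h in ordered]
```

With equal weights, the host carrying the lower load ratio comes first; a higher weight always takes precedence over load.
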
Simple Dispatchers
~~~~~~~~~~~~~~~~~~
Standard request distribution where hosts are sorted first by weight, followed by the chosen strategy (``*random``, ``*round_robin``, ``*weight``).
Broadcast Dispatchers
~~~~~~~~~~~~~~~~~~~~~
Handles scenarios requiring multi-host distribution. Supports three broadcast strategies:
* ``*broadcast``: Sends to all hosts, uses first response
* ``*broadcast_sync``: Sends to all hosts, waits for all responses
* ``*broadcast_async``: Sends to all hosts without waiting (fire-and-forget)
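A minimal sketch of the three broadcast modes using a thread pool (Python pseudocode, not CGRateS code; ``call`` stands in for the per-host RPC):

```python
import concurrent.futures as cf

def broadcast(hosts, call, mode="*broadcast"):
    """Model of the three broadcast strategies described above.
    `call` is invoked once per host, in parallel."""
    pool = cf.ThreadPoolExecutor(max_workers=max(len(hosts), 1))
    futures = [pool.submit(call, h) for h in hosts]
    if mode == "*broadcast_async":
        return None                           # fire-and-forget
    if mode == "*broadcast_sync":
        return [f.result() for f in futures]  # wait for every reply
    # *broadcast: first completed reply wins
    done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    return next(iter(done)).result()
```
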
Parameters
----------
Configure the dispatcher in the **dispatchers** section of the :ref:`JSON configuration <configuration>`:
enabled
Enables/disables the DispatcherS component. Values: <true|false>
indexed_selects
Enables profile matching exclusively on indexes for improved performance
string_indexed_fields
Fields used for string-based index querying
prefix_indexed_fields
Fields used for prefix-based index querying
suffix_indexed_fields
Fields used for suffix-based index querying
nested_fields
Controls indexed filter matching depth. Values: <true|false>
- true: checks all levels
- false: checks only first level
attributes_conns
Connections to :ref:`AttributeS <attributes>` for API authorization
- Empty: disables authorization
- "*internal": uses internal connection
- Custom connection ID
any_subsystem
Enables matching of *any subsystem. Values: <true|false>
prevent_loops
Prevents request loops between dispatcher nodes. Values: <true|false>
DispatcherHost
~~~~~~~~~~~~~~
Defines individual dispatch destinations with the following parameters:
Tenant
The tenant on the platform
ID
Unique identifier for the host
Address
Host address (use *internal for internal connections)
Transport
Protocol used for communication (*gob, *json)
ConnectAttempts
Number of connection attempts
Reconnects
Maximum number of reconnection attempts
MaxReconnectInterval
Maximum interval between reconnection attempts
ConnectTimeout
Connection timeout (e.g., "1m")
ReplyTimeout
Response timeout (e.g., "2m")
TLS
TLS connection settings:
- ClientKey: Path to client key file
- ClientCertificate: Path to client certificate
- CaCertificate: Path to CA certificate
DispatcherProfile
~~~~~~~~~~~~~~~~~
Defines routing rules and strategies. See :ref:`dispatcher-types` for available strategies.
Tenant
The tenant on the platform
ID
Profile identifier
Subsystems
Target subsystems (*any for all)
FilterIDs
List of filters for request matching
ActivationInterval
Time interval when profile is active
Strategy
Dispatch strategy (``*weight``, ``*random``, ``*round_robin``, ``*broadcast``, ``*broadcast_sync``, ``*broadcast_async``)
StrategyParameters
Additional strategy configuration (e.g., *default_ratio)
ConnID
Target host identifier
ConnFilterIDs
Filters for connection selection
ConnWeight
Priority weight for connection selection within the profile
ConnBlocker
Blocks connection if true
ConnParameters
Additional connection parameters (e.g., *ratio)
Weight
Priority weight used when selecting between multiple matching profiles
Use Cases
---------
- Load balancing between multiple CGRateS nodes
- High availability setups with automatic failover
- Request authorization and access control
- Broadcasting requests for data collection
- Traffic distribution based on weight or custom metrics
- System scaling and performance optimization
docs/ees.rst Normal file
@@ -0,0 +1,251 @@
.. _AMQP: https://www.amqp.org/
.. _SQS: https://aws.amazon.com/de/sqs/
.. _S3: https://aws.amazon.com/de/s3/
.. _Kafka: https://kafka.apache.org/
.. _EEs:
EEs
====
**EventExporterService/EEs** is a subsystem designed to convert internal, already processed events into external ones and then export them to a defined destination. It is accessible via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Configuration
-------------
**EEs** is configured within **ees** section from :ref:`JSON configuration <configuration>`.
Config params
^^^^^^^^^^^^^
Most of the parameters are explained in :ref:`JSON configuration <configuration>`, hence we mention here only the ones where additional info is necessary or there will be particular implementation for *EventExporterService*.
One **exporters** instance includes the following parameters:
id
Exporter identifier, used mostly for debugging. The id should be unique per exporter, since it influences how the configuration is updated from different *.json* configuration files.
type
Specify the type of export which will run. Possible values are:
**\*file_csv**
Exports into a comma separated file format.
**\*file_fwv**
Exports into a fixed width file format.
**\*http_post**
Will post the CDR to a HTTP server. The export content will be a HTTP form encoded representation of the `internal CDR object <https://godoc.org/github.com/cgrates/cgrates/engine#CDR>`_.
**\*http_json_map**
Will post the CDR to a HTTP server. The export content will be a JSON serialized hmap with fields defined within the *fields* section of the template.
**\*amqp_json_map**
Will post the CDR to an AMQP_ queue. The export content will be a JSON serialized hmap with fields defined within the *fields* section of the template. Uses AMQP_ protocol version 0.9.1.
**\*amqpv1_json_map**
Will post the CDR to an AMQP_ queue. The export content will be a JSON serialized hmap with fields defined within the *fields* section of the template. Uses AMQP_ protocol version 1.0.
**\*sqs_json_map**
Will post the CDR to an `Amazon SQS queue <SQS>`_. The export content will be a JSON serialized hmap with fields defined within the *fields* section of the template.
**\*s3_json_map**
Will post the CDR to `Amazon S3 storage <S3>`_. The export content will be a JSON serialized hmap with fields defined within the *fields* section of the template.
**\*kafka_json_map**
Will post the CDR to an `Apache Kafka <Kafka>`_. The export content will be a JSON serialized hmap with fields defined within the *fields* section of the template.
**\*nats_json_map**
Exporter for publishing messages to NATS (Message Queue) in JSON format.
**\*virt**
In-memory exporter.
**\*els**
Exporter for Elasticsearch.
**\*sql**
Exporter for generic content to *SQL* databases. Supported databases are MySQL, PostgreSQL and MSSQL.
**\*rpc**
Exporter for calling APIs through node connections.
export_path
Specify the export path. It has special format depending of the export type.
**\*file_csv**, **\*file_fwv**
Standard unix-like filesystem path.
**\*http_post**, **\*http_json_map**
Full HTTP URL
**\*amqp_json_map**, **\*amqpv1_json_map**
AMQP URL with extra parameters.
Sample: *amqp://guest:guest@localhost:5672/?queue_id=cgrates_cdrs&exchange=exchangename&exchange_type=fanout&routing_key=cgr_cdrs*
**\*sqs_json_map**
SQS URL with extra parameters.
Sample: *http://sqs.eu-west-2.amazonaws.com/?aws_region=eu-west-2&aws_key=testkey&aws_secret=testsecret&queue_id=cgrates-cdrs*
**\*s3_json_map**
S3 URL with extra parameters.
Sample: *http://s3.us-east-2.amazonaws.com/?aws_region=eu-west-2&aws_key=testkey&aws_secret=testsecret&queue_id=cgrates-cdrs*
**\*kafka_json_map**
Kafka URL with extra parameters.
Sample: *localhost:9092?topic=cgrates_cdrs*
**\*sql**
SQL URL with extra parameters.
Sample: *mysql://cgrates:CGRateS.org@127.0.0.1:3306*
**\*nats_json_map**
NATS URL.
Sample: *nats://localhost:4222*
**\*els**
Elasticsearch URL
Sample: *http://localhost:9200*
filters
List of filters that must pass in order for the export profile to execute. For dynamic content (prefixed with *~*), the following special variables are available:
**\*req**
The *CDR* event itself.
**\*ec**
The *EventCost* object with subpaths for all of it's nested objects.
tenant
Tenant owning the template. It will be used mostly to match inside :ref:`FilterS`.
synchronous
Block further exports until this one finishes. In case of *false*, control is passed to the next export template as soon as this one has started.
attempts
Number of attempts before giving up on the export and writing the failed request to file. The failed request will be written to *failed_posts_dir*.
fields
List of fields for the exported event.
One **field template** will contain the following parameters:
path
Path for the exported content. Possible prefixes here are:
*\*exp*
Reference to the exported record.
*\*hdr*
Reference to the header content. Available in case of **\*file_csv** and **\*file_fwv** export types.
*\*trl*
Reference to the trailer content. Available in case of **\*file_csv** and **\*file_fwv** export types.
type
The field type will give out the logic for generating the value. Values used depend on the type of prefix used in path.
For *\*exp*, following field types are implemented:
**\*variable**
Writes out the variable value, overwriting previous one set.
**\*composed**
Writes out the variable value, postpending to previous value set
**\*filler**
Fills the values with a fixed length string.
**\*constant**
Writes out a constant
**\*datetime**
Parses the value as datetime and reformats based on the *layout* attribute.
**\*combimed**
Writes out a combined mediation considering events with the same *CGRID*.
**\*masked_destination**
Masks the destination using *\** as suffix. Matches the destination field against the list defined via the *mask_destination_id* field.
**\*http_post**
Uses a HTTP server as datasource for the value exported.
For *\*hdr* and *\*trl*, following field types are possible:
**\*filler**
Fills the values with a string.
**\*constant**
Writes out a constant
**\*handler**
Will obtain the content via a handler. This works in tandem with the attribute *handler_id*.
value
The exported value. Works in tandem with *type* attribute. Possible prefixes for dynamic values:
**\*req**
Data is taken from the current request coming from the *CDRs* component.
mandatory
Makes sure that the field cannot have empty value (errors otherwise).
tag
Used for debug purposes in logs.
width
Used to control the formatting, enforcing the final value to a specific number of characters.
strip
Used when the value is higher than *width* allows it, specifying the strip strategy. Possible values are:
**\*right**
Strip the suffix.
**\*xright**
Strip the suffix, postpending one *x* character to mark the stripping.
**\*left**
Strip the prefix.
**\*xleft**
Strip the prefix, prepending one *x* character to mark the stripping.
padding
Used to control the formatting. Applied when the data is smaller than the *width*. Possible values are:
**\*right**
Suffix with spaces.
**\*left**
Prefix with spaces.
**\*zeroleft**
Prefix with *0* chars.
mask_destination_id
The destinations profile where we match the *masked_destinations*.
handler_id
The identifier of the handler to be executed in case of *\*handler* *type*.
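The interaction of *width*, *strip* and *padding* can be sketched as a single formatting helper. This is illustrative Python only, not the exporter's actual implementation:

```python
def format_field(value, width=0, strip="*right", padding="*right"):
    """Apply the width/strip/padding rules described above to an exported
    field value. A width of 0 leaves the value untouched."""
    if width <= 0:
        return value
    if len(value) > width:                      # too long: strip
        if strip == "*right":
            return value[:width]
        if strip == "*xright":
            return value[:width - 1] + "x"      # mark the stripping
        if strip == "*left":
            return value[-width:]
        if strip == "*xleft":
            return "x" + value[-(width - 1):]
    if len(value) < width:                      # too short: pad
        if padding == "*right":
            return value.ljust(width)
        if padding == "*left":
            return value.rjust(width)
        if padding == "*zeroleft":
            return value.rjust(width, "0")
    return value
```
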
@@ -38,7 +38,7 @@ With explanations in the comments:
"id": "file_reader2", // file_reader2 reader
"run_delay": "-1", // reading of events is triggered outside of ERs
"field_separator": ";", // field separator definition
"type": "*fileCSV", // type of reader, *fileCSV can read .csv files
"type": "*file_csv", // type of reader, *file_csv can read .csv files
"row_length" : 0, // Number of fields from csv file
"flags": [ // influence processing logic within CGRateS workflow
"*cdrs", // *cdrs will create CDRs
@@ -46,7 +46,7 @@ With explanations in the comments:
],
"source_path": "/tmp/ers2/in", // location of the files
"processed_path": "/tmp/ers2/out", // move the files here once processed
"content_fields":[ // mapping definition between line index in the file and CGRateS field
"fields":[ // mapping definition between line index in the file and CGRateS field
{
"tag": "OriginID", // OriginID together with OriginHost will
"path": "*cgreq.OriginID", // uniquely identify the session on CGRateS side
@@ -131,16 +131,16 @@ id
type
Reader type. Following types are implemented:
**\*fileCSV**
**\*file_csv**
Reader for *comma separated* files.
**\*fileXML**
**\*file_xml**
Reader for *.xml* formatted files.
**\*fileFWV**
**\*file_fwv**
Reader for *fixed width value* formatted files.
**\*kafkaJSONMap**
**\*kafka_json_map**
Reader for hashmaps within Kafka_ database.
**\*sql**
@@ -158,7 +158,7 @@ source_path
processed_path
Optional path for moving the events source to after processing.
xmlRootPath
xml_root_path
Used in case of XML content and will specify the prefix path applied to each xml element read.
tenant
@@ -180,10 +180,10 @@ filters
Request read from the source. In case of file content without field name, the index will be passed instead of field source path.
**\*hdr**
Header values (available only in case of *\*fileFWV*). In case of file content without field name, the index will be passed instead of field source path.
Header values (available only in case of *\*file_fwv*). In case of file content without field name, the index will be passed instead of field source path.
**\*trl**
Trailer values (available only in case of *\*fileFWV*). In case of file content without field name, the index will be passed instead of field source path.
Trailer values (available only in case of *\*file_fwv*). In case of file content without field name, the index will be passed instead of field source path.
flags
Special tags enforcing the actions/verbs done on an event. There are two types of flags: **main** and **auxiliary**.
@@ -202,13 +202,13 @@ flags
**\*none**
Disable transferring the Event from *Reader* to *CGRateS* side.
**\*dryRun**
**\*dryrun**
In addition to not transferring the Event to the CGRateS side, it will also log it, useful for troubleshooting.
**\*auth**
Sends the Event for authorization on CGRateS.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*rouIgnoreErrors**, **\*routesEventCost**, **\*rouMaxCost** which are used to influence the auth behavior on CGRateS side. More info on that can be found on the **SessionS** component's API behavior.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*routes_ignore_errors**, **\*routes_event_cost**, **\*routes_maxcost** which are used to influence the auth behavior on CGRateS side. More info on that can be found on the **SessionS** component's API behavior.
**\*initiate**
Initiates a session out of Event on CGRateS side.
@@ -228,7 +228,7 @@ flags
**\*message**
Process the Event as individual message charging on CGRateS side.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*rouIgnoreErrors**, **\*routesEventCost**, **\*rouMaxCost** which are used to influence the behavior on CGRateS side.
Auxiliary flags available: **\*attributes**, **\*thresholds**, **\*stats**, **\*resources**, **\*accounts**, **\*routes**, **\*routes_ignore_errors**, **\*routes_event_cost**, **\*routes_maxcost** which are used to influence the behavior on CGRateS side.
**\*event**
Process the Event as generic event on CGRateS side.
@@ -236,7 +236,7 @@ flags
Auxiliary flags available: all flags supported by the "SessionSv1.ProcessEvent" generic API
**\*cdrs**
Build a CDR out of the Event on CGRateS side. Can be used simultaneously with other flags (except *\*dry_run)
Build a CDR out of the Event on CGRateS side. Can be used simultaneously with other flags (except **\*dryrun**)
path
Defined within field, specifies the path where the value will be written. Possible values:
@@ -248,10 +248,10 @@ path
Write the value in the request object which will be sent to CGRateS side.
**\*hdr**
Header values (available only in case of *\*fileFWV*). In case of file content without field name, the index will be passed instead of field source path.
Header values (available only in case of *\*file_fwv*). In case of file content without field name, the index will be passed instead of field source path.
**\*trl**
Trailer values (available only in case of *\*fileFWV*). In case of file content without field name, the index will be passed instead of field source path.
Trailer values (available only in case of *\*file_fwv*). In case of file content without field name, the index will be passed instead of field source path.
type
@@ -272,7 +272,7 @@ type
**\*composed**
Writes out the variable value, postpending to previous value set
**\*usageDifference**
**\*usage_difference**
Calculates the usage difference between two arguments passed in the *value*. Requires 2 arguments: *$stopTime;$startTime*
**\*sum**
@@ -281,7 +281,7 @@ type
**\*difference**
Calculates the difference between all arguments passed within *value*. Possible value types are (in this order): duration, time, float, int.
**\*valueExponent**
**\*value_exponent**
Calculates the exponent of a value. It requires two values: *$val;$exp*
**\*template**
@@ -21,6 +21,7 @@ Definition::
Tenant string
ID string
Rules []*FilterRule
ActivationInterval *utils.ActivationInterval
}
A Filter profile can be shared between multiple subsystem profile definitions.
@@ -77,6 +78,12 @@ The following types are implemented:
\*notexists
Is the negation of *\*exists*.
\*timings
Will compare the time contained in *Element* with one of the TimingIDs defined in Values.
\*nottimings
Is the negation of *\*timings*.
\*destinations
Will make sure that the *Element* is a prefix contained inside one of the destination IDs as *Values*.
docs/guardian.rst Normal file
@@ -0,0 +1,39 @@
.. _guardian:
Guardian
========
Guardian is CGRateS' internal locking mechanism that ensures data consistency during concurrent operations.
What Guardian Does
------------------
Guardian prevents race conditions when multiple processes try to access or modify the same data. It uses string-based locks, typically created using some variation of the tenant and ID of the resource being protected, often with a type prefix. Guardian can use either explicit IDs or generate UUIDs internally for reference-based locking when no specific ID is provided.
When CGRateS Uses Guardian
--------------------------
Guardian protects:
* Account balance operations (debits/topups) - the most critical use case
* ResourceProfiles, Resources, StatQueueProfiles, StatQueues, ThresholdProfiles, and Thresholds while they're being used or loaded into cache
* Filter index updates
* There are other cases, but the ones listed above are the most frequent applications
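The string-keyed locking described above boils down to one mutex per resource key, plus a short-lived guard around the lock table itself. A minimal sketch in Python, not the actual Go implementation:

```python
import threading
from contextlib import contextmanager

class Guardian:
    """Minimal sketch of string-keyed locking as described above."""

    def __init__(self):
        self._mu = threading.Lock()  # protects the lock table itself
        self._locks = {}             # one lock per resource key

    @contextmanager
    def guard(self, key):
        with self._mu:
            lock = self._locks.setdefault(key, threading.Lock())
        lock.acquire()               # serialize work on the same key
        try:
            yield
        finally:
            lock.release()
```

A debit on account 1001 would then run inside ``guard("*accounts:cgrates.org:1001")`` (key format hypothetical), serializing concurrent debits on that account while operations on other accounts proceed in parallel.
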
Performance Implications
------------------------
Guardian affects system performance in these ways:
* Operations on the same resource are processed one after another, not simultaneously
* Under heavy load on the same resources, operations may queue up and wait
* System throughput is better when operations are distributed across different resources
Configuration
-------------
Guardian has a single configuration option:
The ``locking_timeout`` setting in the general configuration determines how long Guardian will hold a lock before forcing it to release. A zero timeout (no timeout) is the default and recommended setting. However, setting a reasonable timeout can help prevent system hangs if a process fails to release a lock.
When a timeout occurs, Guardian logs a warning and forces the lock to release. This keeps the system running, but the operation that timed out may fail.
@@ -10,14 +10,15 @@ Welcome to `CGRateS`_'s documentation!
.. toctree::
:maxdepth: 4
overview
architecture
installation
configuration
administration
advanced
tutorials
tutorial
troubleshooting
miscellaneous
@@ -1,270 +1,383 @@
.. _Docker: https://docs.docker.com/get-started/get-docker/
.. _Redis: https://redis.io/
.. _MySQL: https://dev.mysql.com/
.. _PostgreSQL: https://www.postgresql.org/
.. _MongoDB: https://www.mongodb.com/
.. _installation:
Installation
============
CGRateS can be installed via packages as well as Go automated source install.
We recommend using source installs for advanced users familiar with Go programming and packages for users not willing to be involved in the code building process.
.. contents::
:local:
:depth: 2
CGRateS can be installed either via packages or through an automated Go source installation. We recommend the latter for advanced users familiar with Go programming, and package installations for those not wanting to engage in the code building process.
After completing the installation, you need to perform the :ref:`post-install configuration <post_install>` steps to set up CGRateS properly and prepare it to run. After these steps, CGRateS will be configured in **/etc/cgrates/cgrates.json** and the service can be managed using the **systemctl** command.
Package Installation
--------------------
Package installation method varies according to the Linux distribution:
Debian or Debian-based Distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can add the CGRateS repository to your system's sources list, depending on the Debian version you are running, as follows:
.. tabs::
.. group-tab:: Bookworm
.. code-block:: bash
# Install dependencies
sudo apt-get install wget gnupg -y
# Download and move the GPG Key to the trusted area
wget https://apt.cgrates.org/apt.cgrates.org.gpg.key -O apt.cgrates.org.asc
sudo mv apt.cgrates.org.asc /etc/apt/trusted.gpg.d/
# Add the repository to the apt sources list
echo "deb http://apt.cgrates.org/debian/ master-bookworm main" | sudo tee /etc/apt/sources.list.d/cgrates.list
# Update the system repository and install CGRateS
sudo apt-get update -y
sudo apt-get install cgrates -y
Alternatively, you can manually install a specific .deb package as follows:
.. code-block:: bash
wget http://pkg.cgrates.org/deb/master/bookworm/cgrates_current_amd64.deb
sudo dpkg -i ./cgrates_current_amd64.deb
.. group-tab:: Bullseye
.. code-block:: bash
# Install dependencies
sudo apt-get install wget gnupg -y
# Download and move the GPG Key to the trusted area
wget https://apt.cgrates.org/apt.cgrates.org.gpg.key -O apt.cgrates.org.asc
sudo mv apt.cgrates.org.asc /etc/apt/trusted.gpg.d/
# Add the repository to the apt sources list
echo "deb http://apt.cgrates.org/debian/ master-bullseye main" | sudo tee /etc/apt/sources.list.d/cgrates.list
# Update the system repository and install CGRateS
sudo apt-get update -y
sudo apt-get install cgrates -y
Alternatively, you can manually install a specific .deb package as follows:
.. code-block:: bash
wget http://pkg.cgrates.org/deb/master/bullseye/cgrates_current_amd64.deb
sudo dpkg -i ./cgrates_current_amd64.deb
Using packages
--------------
Depending on the packaged distribution, the following methods are available:
.. note::
A complete archive of CGRateS packages is available at http://pkg.cgrates.org/deb/.
Debian
^^^^^^
Redhat-based Distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^
There are two main ways of installing the maintained packages:
For .rpm distros, we are using copr to manage the CGRateS packages:
- If using a version of Linux with dnf:
.. code-block:: bash
# sudo yum install -y dnf-plugins-core on RHEL 8 or CentOS Stream
sudo dnf install -y dnf-plugins-core
sudo dnf copr -y enable cgrates/master
sudo dnf install -y cgrates
- For older distributions:
.. code-block:: bash
sudo yum install -y yum-plugin-copr
sudo yum copr -y enable cgrates/master
sudo yum install -y cgrates
To install a specific version of the package, run:
.. code-block:: bash
sudo dnf install -y cgrates-<version>.x86_64
Alternatively, you can manually install a specific .rpm package as follows:
.. code-block:: bash
wget http://pkg.cgrates.org/rpm/nightly/epel-9-x86_64/cgrates-current.rpm
sudo dnf install ./cgrates_current.rpm
Aptitude repository
~~~~~~~~~~~~~~~~~~~
.. note::
The entire archive of CGRateS rpm packages is available at https://copr.fedorainfracloud.org/coprs/cgrates/master/packages/ or http://pkg.cgrates.org/rpm/nightly/.
Installing from Source
----------------------
Add the gpg key:
Prerequisites:
^^^^^^^^^^^^^^
::
- **Git**
sudo wget -O - https://apt.cgrates.org/apt.cgrates.org.gpg.key | sudo apt-key add -
.. code-block:: bash
Add the repository in apt sources list:
sudo apt-get install -y git
# sudo dnf install -y git for .rpm distros
::
- **Go** (refer to the official Go installation docs: https://go.dev/doc/install)
echo "deb http://apt.cgrates.org/debian/ nightly main" | sudo tee /etc/apt/sources.list.d/cgrates.list
To install the latest Go version at the time of writing this documentation, run:
Update & install:
::
sudo apt-get update
sudo apt-get install cgrates
Once the installation is completed, one should perform the :ref:`post_install` section in order to have the CGRateS properly set and ready to run.
After *post-install* actions are performed, CGRateS will be configured in **/etc/cgrates/cgrates.json** and enabled in **/etc/default/cgrates**.
Manual installation of .deb package out of archive server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run the following commands:
::
wget http://pkg.cgrates.org/deb/nightly/cgrates_current_amd64.deb
dpkg -i cgrates_current_amd64.deb
As a side note on http://pkg.cgrates.org/deb/ one can find an entire archive of CGRateS packages.
Redhat/Fedora/CentOS
^^^^^^^^^^^^^^^^^^^^
There are two main ways of installing the maintained packages:
YUM repository
~~~~~~~~~~~~~~
To install CGRateS out of YUM execute the following commands
::
sudo tee -a /etc/yum.repos.d/cgrates.repo > /dev/null <<EOT
[cgrates]
name=CGRateS
baseurl=http://yum.cgrates.org/yum/nightly/
enabled=1
gpgcheck=1
gpgkey=https://yum.cgrates.org/yum.cgrates.org.gpg.key
EOT
sudo yum update
sudo yum install cgrates
Once the installation is completed, one should perform the :ref:`post_install` section in order to have the CGRateS properly set and ready to run.
After *post-install* actions are performed, CGRateS will be configured in **/etc/cgrates/cgrates.json** and enabled in **/etc/default/cgrates**.
Manual installation of .rpm package out of archive server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run the following command:
::
sudo rpm -i http://pkg.cgrates.org/rpm/nightly/cgrates_current.rpm
As a side note on http://pkg.cgrates.org/rpm/ one can find an entire archive of CGRateS packages.
Using source
------------
For developing CGRateS and switching between its versions, we are using the **go mods feature** introduced in go 1.13.
.. _InstallGO:
Install GO Lang
^^^^^^^^^^^^^^^
First we have to setup the GO Lang to our OS. Feel free to download
the latest GO binary release from https://golang.org/dl/
In this Tutorial we are going to install Go 1.24
::
.. code-block:: bash
sudo apt-get install -y wget tar
# sudo dnf install -y wget tar for .rpm distros
sudo rm -rf /usr/local/go
cd /tmp
wget https://go.dev/dl/go1.24.linux-amd64.tar.gz
sudo tar -xvf go1.24.linux-amd64.tar.gz -C /usr/local/
export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin
wget https://go.dev/dl/go1.24.2.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.24.2.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Installation:
^^^^^^^^^^^^^
Build CGRateS from Source
^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
Configure the project with the following commands:
::
go get github.com/cgrates/cgrates
mkdir -p $HOME/go/src/github.com/cgrates/cgrates
git clone https://github.com/cgrates/cgrates.git $HOME/go/src/github.com/cgrates/cgrates
cd $HOME/go/src/github.com/cgrates/cgrates
# Compile the binaries and move them to $GOPATH/bin
./build.sh
# Create a symbolic link to the data folder
sudo ln -s $HOME/go/src/github.com/cgrates/cgrates/data /usr/share/cgrates
Create Debian / Ubuntu Packages from Source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# Make cgr-engine binary available system-wide
sudo ln -s $HOME/go/bin/cgr-engine /usr/bin/cgr-engine
After compiling the source code you are ready to create the .deb packages
for your Debian like OS. First lets install some dependencies:
# Optional: Additional useful symbolic links
sudo ln -s $HOME/go/bin/cgr-loader /usr/bin/cgr-loader
sudo ln -s $HOME/go/bin/cgr-migrator /usr/bin/cgr-migrator
sudo ln -s $HOME/go/bin/cgr-console /usr/bin/cgr-console
sudo ln -s $HOME/go/bin/cgr-tester /usr/bin/cgr-tester
Installing Using Docker
-----------------------
CGRateS is also available as Docker images.
Prerequisites
^^^^^^^^^^^^^
- `Docker`_
Pull Docker Images
^^^^^^^^^^^^^^^^^^
The following commands will pull the CGRateS components:
::
sudo apt-get install build-essential fakeroot dh-systemd
sudo docker pull dkr.cgrates.org/master/cgr-engine
sudo docker pull dkr.cgrates.org/master/cgr-loader
sudo docker pull dkr.cgrates.org/master/cgr-migrator
sudo docker pull dkr.cgrates.org/master/cgr-console
sudo docker pull dkr.cgrates.org/master/cgr-tester
Finally we are ready to create the system package. Before creation we make
sure that we delete the old one first.
Verify the images were pulled successfully:
::
sudo docker images dkr.cgrates.org/master/cgr-*
REPOSITORY TAG IMAGE ID CREATED SIZE
dkr.cgrates.org/master/cgr-loader latest 5b667e92a475 6 weeks ago 46.5MB
dkr.cgrates.org/master/cgr-console latest 464bd1992ab2 6 weeks ago 103MB
dkr.cgrates.org/master/cgr-engine latest e20f43491aa8 6 weeks ago 111MB
...
.. note::
While other version-specific tags are available, we recommend using the default **latest** tag for most use cases.
You can check available versions with:
::
curl -X GET https://dkr.cgrates.org/v2/master/cgr-engine/tags/list
cgr-engine Container
^^^^^^^^^^^^^^^^^^^^
The current cgr-engine container entrypoint is:
::
[
"/usr/bin/cgr-engine",
"-logger=*stdout"
]
.. note::
Verify the entrypoint configuration with:
::
sudo docker inspect --format='{{json .Config.Entrypoint}}' dkr.cgrates.org/master/cgr-engine:latest
Running cgr-engine
^^^^^^^^^^^^^^^^^^
Here's a basic example of running cgr-engine with common Docker parameters:
::
sudo docker run --rm \
-v /path/on/host:/etc/cgrates \
-p 2012:2012 \
-e DOCKER_IP=127.0.0.1 \
-e REDIS_HOST=192.168.122.91 \
--network bridge \
--name cgr-engine \
dkr.cgrates.org/master/cgr-engine:latest \
-config_path=/etc/cgrates \
-logger=*stdout
Verify cgr-engine is responding:
::
sudo docker run --rm \
--name cgr-console \
--network host \
dkr.cgrates.org/master/cgr-console:latest \
status
Key parameters:
- ``--rm``: automatically remove container when it exits
- ``-v``: mount host directory into container (format: host_path:container_path)
- ``-p``: publish container port to host (format: host_port:container_port)
- ``-e``: set environment variables (optional, only needed if referenced in configuration files)
- ``--network``: specify container networking mode (bridge for isolation, host for direct host network access)
- ``--name``: assign name to container
.. note::
The ``-config_path`` and ``-logger`` flags above are cgr-engine specific flags and optional, as those values are already the defaults.
Creating Your Own Packages
--------------------------
After compiling the source code, you may choose to create your own packages.
For Debian-based distros:
^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
# Install dependencies
sudo apt-get install build-essential fakeroot dh-systemd -y
cd $HOME/go/src/github.com/cgrates/cgrates/packages
# Delete old ones, if any
rm -rf $HOME/go/src/github.com/cgrates/*.deb
make deb
After some time and maybe some console warnings, your CGRateS package will be ready.
.. note::
You might see some console warnings, which can be safely ignored.
To install the generated package, run:
Install Custom Debian / Ubuntu Package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
.. code-block:: bash
cd $HOME/go/src/github.com/cgrates
sudo dpkg -i cgrates_*.deb
For Redhat-based distros:
^^^^^^^^^^^^^^^^^^^^^^^^^
Generate RPM Packages from Source
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
Prerequisites
* :ref:`Install Golang <InstallGO>`
* Git
sudo dnf install -y rpm-build wget curl tar
::
# Create build directories
mkdir -p $HOME/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
sudo apt-get install git
# Fetch source code
cd $HOME/go/src/github.com/cgrates/cgrates
export gitLastCommit=$(git rev-parse HEAD)
export rpmTag=$(date -d "$(git log -1 --format=%ci)" +%Y%m%d%H%M%S)+$(git rev-parse --short HEAD)
#Create the tarball from the source code
cd ..
tar -czvf $HOME/rpmbuild/SOURCES/$gitLastCommit.tar.gz cgrates
* RPM
::
sudo apt-get install rpm
Execute the following commands
::
cd $HOME/go/src/github.com/cgrates/cgrates
export gitLastCommit=$(git rev-parse HEAD)
export rpmTag=$(git log -1 --format=%ci | date +%Y%m%d%H%M%S)+$(git rev-parse --short HEAD)
mkdir -p $HOME/cgr_build/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
wget -P $HOME/cgr_build/SOURCES https://github.com/cgrates/cgrates/archive/$gitLastCommit.tar.gz
cp $HOME/go/src/github.com/cgrates/cgrates/packages/redhat_fedora/cgrates.spec $HOME/cgr_build/SPECS
cd $HOME/cgr_build
rpmbuild -bb --define "_topdir $HOME/cgr_build" SPECS/cgrates.spec
# Copy RPM spec file
cp $HOME/go/src/github.com/cgrates/cgrates/packages/redhat_fedora/cgrates.spec $HOME/rpmbuild/SPECS
# Build RPM package
cd $HOME/rpmbuild
rpmbuild -bb SPECS/cgrates.spec
.. _post_install:
Post-install
------------
Post-install Configuration
--------------------------
Database setup
Database Setup
^^^^^^^^^^^^^^
For its operation CGRateS uses **one or more** database types, depending on its nature, install and configuration being further necessary.
CGRateS supports multiple database types for various operations, based on your installation and configuration.
At present we support the following databases:
Currently, we support the following databases:
`Redis`_
Can be used as :ref:`DataDB`.
Optimized for real-time information access.
Once installed there should be no special requirements in terms of setup since no schema is necessary.
This can be used as :ref:`DataDB`. It is optimized for real-time information access. Post-installation, no additional setup is required as Redis doesn't require a specific schema.
`MySQL`_
Can be used as :ref:`StorDB`.
Optimized for CDR archiving and offline Tariff Plan versioning.
Once MySQL is installed, CGRateS database needs to be set-up out of provided scripts. (example for the paths set-up by debian package)
This can be used as :ref:`StorDB` and is optimized for CDR archiving and offline Tariff Plan versioning. Post-installation, you need to set up the CGRateS database using the provided scripts:
::
.. code-block:: bash
cd /usr/share/cgrates/storage/mysql/
./setup_cgr_db.sh root CGRateS.org localhost
cd /usr/share/cgrates/storage/mysql/
sudo ./setup_cgr_db.sh root CGRateS.org localhost
`PostgreSQL`_
Can be used as :ref:`StorDB`.
Optimized for CDR archiving and offline Tariff Plan versioning.
Once PostgreSQL is installed, CGRateS database needs to be set-up out of provided scripts (example for the paths set-up by debian package).
Like MySQL, PostgreSQL can be used as :ref:`StorDB`. Post-installation, you need to set up the CGRateS database using the provided scripts:
::
.. code-block:: bash
cd /usr/share/cgrates/storage/postgres/
./setup_cgr_db.sh
cd /usr/share/cgrates/storage/postgres/
./setup_cgr_db.sh
`MongoDB`_
Can be used as :ref:`DataDB` as well as :ref:`StorDB`.
It is the first database that can be used to store all kinds of data stored from CGRateS from accounts, tariff plans to cdrs and logs.
Once MongoDB is installed, CGRateS database needs to be set-up out of provided scripts (example for the paths set-up by debian package)
MongoDB can be used as both :ref:`DataDB` and :ref:`StorDB`. This is the first database that can store all types of data from CGRateS - from accounts, tariff plans to CDRs and logs. Post-installation, you need to set up the CGRateS database using the provided scripts:
::
.. code-block:: bash
cd /usr/share/cgrates/storage/mongo/
./setup_cgr_db.sh
cd /usr/share/cgrates/storage/mongo/
./setup_cgr_db.sh
Set versions data
Set Versions Data
^^^^^^^^^^^^^^^^^
Once database setup is completed, we need to write the versions data. To do this, run migrator tool with the parameters specific to your database.
After completing the database setup, you need to write the versions data. To do this, run the migrator tool with the parameters specific to your database.
Sample usage for MySQL:
::
.. code-block:: bash
cgr-migrator -stordb_passwd="CGRateS.org" -exec="*set_versions"

39
docs/janusagent.rst Normal file
View File

@@ -0,0 +1,39 @@
.. _JanusAgent:
JanusAgent
=============
**JanusAgent** is an API endpoint that connects to the Janus Server through **CGRateS**.
It authorizes WebRTC events in **CGRateS** for each user and afterwards manages and creates sessions on the Janus Server.
The **JanusAgent** is configured within *janus_agent* section from :ref:`JSON configuration <configuration>`.
It listens on HTTP port 2080 at the */janus* endpoint, as specified in the configuration, and accepts the same HTTP requests that would normally be sent to the Janus Server.
Sample config
::
"janus_agent": {
"enabled": false, // enables the Janus agent: <true|false>
"url": "/janus",
"sessions_conns": ["*internal"],
"janus_conns": [{ // instantiate connections to multiple Janus Servers
"address": "127.0.0.1:8088", // janus API address
"type": "*ws", // type of the transport to interact via janus API
"admin_address": "localhost:7188", // janus admin address used to retrive more information for sessions and handles
"admin_password": "", // secret to pass restriction to communicate to the endpoint
}],
"request_processors": [], // request processors to be applied to Janus messages
},
Config params
^^^^^^^^^^^^^
Most of the parameters are explained in the :ref:`JSON configuration <configuration>`, hence we mention here only the ones where additional info is necessary or where the behavior is particular to the *JanusAgent*.
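As a quick sanity check, one can send a standard Janus session-create request to the agent endpoint. This is only a sketch: the address (port 2080) and the */janus* path are assumptions taken from the sample configuration above, and the payload follows the regular Janus HTTP API.

```shell
# Build a standard Janus "create session" payload and post it to the
# JanusAgent endpoint. The address below is an assumption matching the
# sample configuration; adjust it to your setup. "|| true" keeps the
# snippet from aborting when no agent is listening.
payload='{"janus":"create","transaction":"tx-0001"}'
echo "request body: $payload"
curl -s -m 2 -X POST 'http://127.0.0.1:2080/janus' \
  -H 'Content-Type: application/json' \
  -d "$payload" || true
```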
Software Installation
---------------------
For detailed information on installing JanusServer on Debian, please refer to its official `repository <https://github.com/meetecho/janus-gateway#dependencies>`_.

View File

@@ -4,7 +4,7 @@
.. _Kafka: https://kafka.apache.org/
.. _redis: https://redis.io/
.. _mongodb: https://www.mongodb.com/
.. _api docs: https://godoc.org/github.com/cgrates/cgrates/apier
.. _api docs: https://pkg.go.dev/github.com/cgrates/cgrates/apier@master
.. _SQS: https://aws.amazon.com/de/sqs/
.. _AMQP: https://www.amqp.org/
.. _Asterisk: https://www.asterisk.org/
@@ -187,7 +187,7 @@ Links
- CGRateS home page `<http://www.cgrates.org>`_
- Documentation `<http://cgrates.readthedocs.io>`_
- API docs `<https://godoc.org/github.com/cgrates/cgrates/apier>`_
- API docs `<https://pkg.go.dev/github.com/cgrates/cgrates/apier@master>`_
- Source code `<https://github.com/cgrates/cgrates>`_
- Travis CI `<https://travis-ci.org/cgrates/cgrates>`_
- Google group `<https://groups.google.com/forum/#!forum/cgrates>`_
@@ -197,4 +197,4 @@ Links
License
-------
`CGRateS`_ is released under the terms of the `[GNU GENERAL PUBLIC LICENSE Version 3] <http://www.gnu.org/licenses/gpl-3.0.en.html>`_.
`CGRateS`_ is released under the terms of the `[GNU GENERAL PUBLIC LICENSE Version 3] <http://www.gnu.org/licenses/gpl-3.0.en.html>`_.

105
docs/prometheus.rst Normal file
View File

@@ -0,0 +1,105 @@
.. _prometheus_agent:
PrometheusAgent
===============
**PrometheusAgent** is a CGRateS component that exposes metrics for Prometheus monitoring systems. It serves as a bridge between CGRateS and Prometheus by collecting and exposing metrics from:
1. **Core metrics** - collected from configured CGRateS engines via CoreSv1.Status API
2. **StatQueue metrics** - values from CGRateS :ref:`StatS <stats>` component, collected via StatSv1.GetQueueFloatMetrics API
For core metrics, the agent computes real-time values on each Prometheus scrape request. For StatQueue metrics, it retrieves the current state of the stored StatQueues without additional calculations.
Configuration
-------------
Example configuration in the JSON file:
.. code-block:: json
"prometheus_agent": {
"enabled": true,
"path": "/prometheus",
"cores_conns": ["*internal", "external"],
"stats_conns": ["*internal", "external"],
"stat_queue_ids": ["cgrates.org:SQ_1", "SQ_2"]
}
The default configuration can be found in the :ref:`configuration` section.
Parameters
----------
enabled
Enable the PrometheusAgent module. Possible values: <true|false>
path
HTTP endpoint path where Prometheus metrics will be exposed, e.g., "/prometheus" or "/metrics"
cores_conns
List of connection IDs to CoreS components for collecting core metrics. Empty list disables core metrics collection. Possible values: <""|*internal|$rpc_conns_id>
stats_conns
List of connection IDs to StatS components for collecting StatQueue metrics. Empty list disables StatQueue metrics collection. Possible values: <""|*internal|$rpc_conns_id>
stat_queue_ids
List of StatQueue IDs to collect metrics from. Can include tenant in format <[tenant]:ID>. If tenant is not specified, default tenant from general configuration is used.
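On the Prometheus side, a minimal scrape job pointing at the agent could look as follows. This is a sketch under assumptions: 127.0.0.1:2080 is taken as the cgr-engine HTTP listener address and */prometheus* as the configured path.

```shell
# Write a minimal Prometheus scrape configuration targeting the agent.
# The target address and metrics path are assumptions; adjust to match
# your cgr-engine HTTP listener and the prometheus_agent "path" setting.
cat > /tmp/prometheus-cgrates.yml <<'EOF'
scrape_configs:
  - job_name: cgrates
    metrics_path: /prometheus
    scrape_interval: 15s
    static_configs:
      - targets: ['127.0.0.1:2080']
EOF
grep -c 'metrics_path: /prometheus' /tmp/prometheus-cgrates.yml  # prints: 1
```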
Available Metrics
-----------------
The PrometheusAgent exposes the following metrics:
1. **StatQueue Metrics**
- Uses the naming format ``cgrates_stats_metrics`` with labels for tenant, queue, and metric type
- Obtained from StatS services on each scrape request
Example of StatQueue metrics output:
.. code-block:: none
# HELP cgrates_stats_metrics Current values for StatQueue metrics
# TYPE cgrates_stats_metrics gauge
cgrates_stats_metrics{metric="*acc",queue="SQ_1",tenant="cgrates.org"} 7.73779
cgrates_stats_metrics{metric="*tcc",queue="SQ_1",tenant="cgrates.org"} 23.21337
cgrates_stats_metrics{metric="*acc",queue="SQ_2",tenant="cgrates.org"} 11.34716
cgrates_stats_metrics{metric="*tcc",queue="SQ_2",tenant="cgrates.org"} 34.04147
.. note::
StatQueue metrics don't include node_id labels since StatQueues can be shared between CGRateS instances. Users should ensure StatQueue IDs are unique across their environment.
2. **Core Metrics** (when cores_conns is configured)
- Standard Go runtime metrics (go_goroutines, go_memstats_*, etc.)
- Standard process metrics (process_cpu_seconds_total, process_open_fds, etc.)
- Node identification via "node_id" label, allowing multiple CGRateS engines to be monitored
Example of core metrics output:
.. code-block:: none
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines{node_id="e94160b"} 40
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total{node_id="e94160b"} 0.34
# HELP go_memstats_alloc_bytes Number of bytes allocated in heap and currently in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes{node_id="e94160b"} 1.1360808e+07
How It Works
------------
The PrometheusAgent operates differently than other CGRateS components that use connection failover:
- When multiple connections are configured in stats_conns, the agent collects metrics from **all** connections, not just the first available one
- When multiple connections are configured in cores_conns, the agent attempts to collect metrics from **all** connections, labeling them with their respective node_id
- The agent processes metrics requests only when Prometheus sends a scrape request to the configured HTTP endpoint
You can view all exported metrics and see what Prometheus would scrape by making a simple curl request to the HTTP endpoint:
.. code-block:: bash
curl http://localhost:2080/prometheus

View File

@@ -4,7 +4,7 @@ RALs
====
**RALs** is a standalone subsystem within **CGRateS** designed to handle two major tasks: :ref:`Rating` and :ref:`Accounting`. It is accessed via `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_.
**RALs** is a standalone subsystem within **CGRateS** designed to handle two major tasks: :ref:`Rating` and :ref:`Accounting`. It is accessed via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
@@ -51,7 +51,7 @@ FallbackSubjects
RatingPlan
^^^^^^^^^^
Groups together rates per destination. Configured via the following parameters:
Groups together rates per destination and relates them to event timing. Configured via the following parameters:
ID
The tag uniquely identifying each RatingPlan. There can be multiple entries grouped by the same ID.
@@ -59,8 +59,11 @@ ID
DestinationRatesID
The identifier of the :ref:`DestinationRate` set.
TimingID
The identifier of the :ref:`Timing` profile.
Weight
Priority of matching rule (*DestinationRatesID*). Higher value equals higher priority.
Priority of matching rule (*DestinationRatesID*+*TimingID*). Higher value equals higher priority.
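For illustration, a *RatingPlans.csv* row binding a DestinationRate set to a Timing profile could look as follows (the IDs are hypothetical and the column order should be checked against the tariff-plan samples shipped with CGRateS):

::

    #Id,DestinationRatesId,TimingTag,Weight
    RP_RETAIL,DR_RETAIL,WORKDAYS,10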
.. _DestinationRate:
@@ -147,6 +150,36 @@ GroupIntervalStart
Activates the rate at specific usage within the event.
.. _Timing:
Timing
^^^^^^
A *Timing* profile gives time awareness to an event. Configured via the following parameters:
ID
The tag uniquely identifying each *Timing* profile.
Years
List of years to match within the event. Defaults to the catch-all meta: *\*any*.
Months
List of months to match within the event. Defaults to the catch-all meta: *\*any*.
MonthDays
List of month days to match within the event. Defaults to the catch-all meta: *\*any*.
WeekDays
List of week days to match within the event, as integer values. A special case is *Sunday*, which matches both 0 and 7.
Time
The exact time to match (mostly as time start). Defined in the format: *hh:mm:ss*
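As a sketch, a *Timings.csv* entry matching working days at 08:00 could look as follows (the ID is hypothetical and the column order should be checked against the tariff-plan samples shipped with CGRateS):

::

    #Id,Years,Months,MonthDays,WeekDays,Time
    WORKDAYS,*any,*any,*any,1;2;3;4;5,08:00:00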
.. Note:: Due to optimization, CGRateS encapsulates and stores the rating information into just three objects: *Destinations*, *RatingProfiles* and *RatingPlan* (composed out of *RatingPlan*, *DestinationRate*, *Rate* and *Timing* objects).
.. _Accounting:
@@ -210,10 +243,10 @@ The following *BalanceTypes* are supported:
Coupled with MMS events, represents number of MMS units.
\*generic
Matching all types of events after specific ones, represents generic units (ie: for each x *voice minutes, y *sms units, z *data units will have )
Matching all types of events after specific ones, representing generic units (i.e., x \*voice minutes, y \*sms units or z \*data units each consume the generic balance according to their conversion factors)
\*monetary
Matching all types of events after specific ones, represents monetary units (can be interpreted as virtual currency).
Matching all types of events after specific ones, representing monetary units (can be interpreted as virtual currency).
@@ -254,10 +287,13 @@ Categories
SharedGroup
Pointing towards a shared balance ID.
TimingIDs
List of :ref:`Timing` profiles this *Balance* will match for, considering event's *AnswerTime* field.
Disabled
Makes the *Balance* invisible to charging.
Factor
Factors
Used in case of the *\*generic* *BalanceType* to specify the conversion factors for different types of events.
Blocker
@@ -345,7 +381,7 @@ Action
Actions are routines executed on demand (ie. by one of the three subsystems: :ref:`SchedulerS`, :ref:`ThresholdS` or :ref:`ActionTriggers <ActionTrigger>`) or called by API by external scripts.
An *Action has the following parameters:
An \*Action has the following parameters:
ID
*ActionSet* identifier.
@@ -398,7 +434,7 @@ ActionType
**\*disable_account**
Set the :ref:`Account` *Disabled* flag.
**\*httpPost**
**\*http_post**
Post data over HTTP protocol to configured HTTP URL.
**\*http_post_async**
@@ -443,7 +479,7 @@ ActionType
**\*remove_expired**
Removes expired balances of type matching the filter.
**\*cdr_account**
**\*reset_account_cdr**
Creates the account out of last *CDR* saved in :ref:`StorDB` matching the account details in the filter. The *CDR* should contain *AccountSummary* within it's *CostDetails*.

166
docs/rankings.rst Normal file
View File

@@ -0,0 +1,166 @@
.. _RankingS:
RankingS
========
**RankingS** is a standalone subsystem part of the **CGRateS** infrastructure, designed to work as an extension of the :ref:`StatS`, by regularly querying it for a list of predefined StatProfiles and ordering them based on their metrics.
Complete interaction with **RankingS** is possible via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Due to its real-time nature, **RankingS** is designed towards high throughput, being able to process thousands of queries per second. This is possible since each *Ranking* is a very light object, held in memory and eventually backed up in :ref:`DataDB`.
Processing logic
----------------
In order for **RankingS** to start querying the :ref:`StatS`, it will need to be *scheduled* to do that. Scheduling is being done using `Cron Expressions <https://en.wikipedia.org/wiki/Cron>`_.
Once *Cron Expressions* are defined within a *RankingProfile*, internal **Cron Scheduler** needs to be triggered. This can happen in two different ways:
Automatic Query Scheduling
The profiles needing querying will be inserted into the **RankingS** :ref:`JSON configuration <configuration>`. Leaving any part of *ranking_id* or *tenant* empty will be interpreted as a catch-all filter.
API Scheduling
The profiles needing querying will be sent inside arguments to the `RankingSv1.ScheduleQueries API call <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
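For illustration, a few *Schedule* values as they could appear in a *RankingProfile* (standard cron field order: minute, hour, day of month, month, day of week):

::

    */5 * * * *    query the configured StatProfiles every 5 minutes
    0 * * * *      query at the start of every hour
    30 2 * * 1     query Mondays at 02:30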
Offline storage
---------------
Offline storage is optionally possible by enabling the profile's *Stored* flag and configuring the *store_interval* inside the :ref:`JSON configuration <configuration>`.
Ranking querying
----------------
In order to query a **Ranking** (ie: to be displayed in a web interface), one should use the `RankingSv1.GetRanking API call <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_ or the `RankingSv1.GetRankingSummary API call <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Ranking exporting
-----------------
On each **Ranking** change, it will be possible to send a specially crafted *RankingSummary* event to one of the following subsystems:
**ThresholdS**
Sending the **RankingUpdate** event gives the administrator the possibility to react to *Ranking* changes, including escalation strategies offered by the **ThresholdS** parameters.
Fine-tuning parameters (i.e. selecting only specific ThresholdProfiles to increase speed) are available directly within the **RankingProfile**.
**EEs**
**EEs** makes it possible to export the **RankingUpdate** to all the available outside interfaces of **CGRateS**.
Both exporting options are enabled within :ref:`JSON configuration <configuration>`.
Parameters
----------
RankingS
^^^^^^^^
**RankingS** is the **CGRateS** service component responsible for generating the **Ranking** queries.
It is configured within **RankingS** section from :ref:`JSON configuration <configuration>` via the following parameters:
enabled
Will enable starting of the service. Possible values: <true|false>.
store_interval
Time interval for backing up the Rankings into *DataDB*. Use 0 to completely disable the functionality and -1 to enable synchronous backup; anything higher than 0 gives the interval for asynchronous backups.
stats_conns
List of connections where we will query the stats.
scheduled_ids
Limits the RankingProfiles generating queries towards **StatS**. Leave empty to enable all available RankingProfiles, or specify only tenants to enable all the available profiles on those tenants.
thresholds_conns
Connection IDs towards the *ThresholdS* component. If not defined, there will be no notifications sent to *ThresholdS* on *Ranking* changes.
ees_conns
Connection IDs towards the *EEs* component. If left empty, no exports will be performed on *Ranking* changes.
ees_exporter_ids
Limit the exporter profiles executed on *Ranking* changes.
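Putting the parameters above together, a hypothetical *rankings* configuration section could look as follows (the section name and the exact shape of *scheduled_ids* are assumptions, to be verified against the shipped default configuration):

::

    "rankings": {
        "enabled": true,
        "stats_conns": ["*internal"],
        "scheduled_ids": {"cgrates.org": ["RANKING_SUPPLIERS"]},
        "thresholds_conns": ["*internal"]
    },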
RankingProfile
^^^^^^^^^^^^^^
It is made of the following fields:
Tenant
The tenant on the platform (one can see the tenant as partition ID).
ID
Identifier for the *RankingProfile*, unique within a *Tenant*.
Schedule
Cron expression scheduling the gathering of the metrics.
StatIDs
List of **StatS** instances to query.
MetricIDs
Limit the list of metrics from the stats instance queried.
Sorting
Sorting strategy for the StatIDs. Possible values:
\*asc
Sort the StatIDs in ascending order based on the list of MetricIDs provided in SortingParameters. One or more MetricIDs can be specified in the SortingParameters for the cases when a one-level sort is not enough to differentiate them. If all metrics are equal, a random sort will be applied.
\*desc
Sort the StatIDs in descending order based on the list of MetricIDs provided in SortingParameters. One or more MetricIDs can be specified in the SortingParameters for the cases when a one-level sort is not enough to differentiate them. If all metrics are equal, a random sort will be applied.
SortingParameters
List of sorting parameters. For the current sorting strategies (\*asc/\*desc) there will be one or more MetricIDs defined.
Metrics can be defined in compressed mode (i.e., ["Metric1","Metric2"]) or extended mode (i.e., ["Metric1:true", "Metric2:false"]), where *false* reverses the sorting logic for that particular metric (i.e., ["\*tcc:true","\*pdd:false"] with the \*desc sorting strategy).
Stored
Enable storing of this *Ranking* instance for persistence.
ThresholdIDs
Limit the *ThresholdProfiles* processing the *RankingUpdate* for this *RankingProfile*.
Ranking
^^^^^^^
A *Ranking* instance is made out of the following fields:
Tenant
The tenant on the platform (one can see the tenant as partition ID).
ID
Unique *Ranking* identifier on a *Tenant*.
LastUpdate
Time of the last Metrics update.
Metrics
Stat Metrics and their values at the query time.
Sorting
Archived sorting strategy from the profile.
SortingParameters
Archived list of sorted parameters from the profile.
SortedStatIDs
List of queried stats, sorted based on sorting strategy and parameters.
Use cases
---------
* Ranking computation for commercial and monitoring applications.
* Revenue assurance applications.
* Fraud detection by ranking specific billing metrics during sensitive time intervals (\*acc, \*tcc, \*tcd).
* Building call patterns.
* Building statistical information to train systems capable of artificial intelligence.
* Building quality metrics used in traffic routing.

4
docs/requirements.in Normal file
View File

@@ -0,0 +1,4 @@
sphinx_rtd_theme==1.2.0
sphinx==6.2.1
sphinx_copybutton==0.5.2
sphinx_tabs==3.4.1

View File

@@ -1,9 +1,70 @@
mock==1.0.1
pillow==5.4.1
alabaster>=0.7,<0.8,!=0.7.5
commonmark==0.8.1
recommonmark==0.5.0
sphinx<2
sphinx-rtd-theme<0.5
readthedocs-sphinx-ext<2.2
docutils>=0.14,<0.18
#
# This file is autogenerated by pip-compile with Python 3.9
# by the following command:
#
# pip-compile --output-file=requirements.txt --resolver=backtracking requirements.in
#
alabaster==0.7.13
# via sphinx
babel==2.12.1
# via sphinx
certifi==2023.5.7
# via requests
charset-normalizer==3.1.0
# via requests
docutils==0.18.1
# via
# sphinx
# sphinx-rtd-theme
# sphinx-tabs
idna==3.4
# via requests
imagesize==1.4.1
# via sphinx
importlib-metadata==6.6.0
# via sphinx
jinja2==3.1.2
# via sphinx
markupsafe==2.1.2
# via jinja2
packaging==23.1
# via sphinx
pygments==2.15.1
# via
# sphinx
# sphinx-tabs
requests==2.30.0
# via sphinx
snowballstemmer==2.2.0
# via sphinx
sphinx==6.2.1
# via
# -r requirements.in
# sphinx-copybutton
# sphinx-rtd-theme
# sphinx-tabs
# sphinxcontrib-jquery
sphinx-copybutton==0.5.2
# via -r requirements.in
sphinx-rtd-theme==1.2.0
# via -r requirements.in
sphinx-tabs==3.4.1
# via -r requirements.in
sphinxcontrib-applehelp==1.0.4
# via sphinx
sphinxcontrib-devhelp==1.0.2
# via sphinx
sphinxcontrib-htmlhelp==2.0.1
# via sphinx
sphinxcontrib-jquery==4.1
# via sphinx-rtd-theme
sphinxcontrib-jsmath==1.0.1
# via sphinx
sphinxcontrib-qthelp==1.0.3
# via sphinx
sphinxcontrib-serializinghtml==1.1.5
# via sphinx
urllib3==2.0.2
# via requests
zipp==3.15.0
# via importlib-metadata

View File

@@ -6,7 +6,7 @@ ResourceS
**ResourceS** is a standalone subsystem part of the **CGRateS** infrastructure, designed to allocate virtual resources for the generic *Events* (hashmaps) it receives.
Both receiving of *Events* as well as operational commands on the virtual resources is performed via a complete set of `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_.
Both receiving of *Events* as well as operational commands on the virtual resources is performed via a complete set of `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Due to its real-time nature, **ResourceS** is designed towards high throughput, being able to process thousands of *Events* per second. This is doable since each *Resource* is a very light object, held in memory and eventually backed up in *DataDB*.
@@ -46,7 +46,7 @@ nested_fields
ResourceProfile
^^^^^^^^^^^^^^^
The **ResourceProfile** is the configuration of a *Resource*. This will be performed over `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_ or *.csv* files. A profile is comprised out of the following parameters:
The **ResourceProfile** is the configuration of a *Resource*. This will be performed over `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_ or *.csv* files. A profile is comprised out of the following parameters:
Tenant
The tenant on the platform (one can see the tenant as partition ID).
@@ -57,6 +57,9 @@ ID
FilterIDs
List of *FilterProfiles* which should match in order to consider the *ResourceProfile* matching the event.
ActivationInterval
The time interval when this profile becomes active. If undefined, the profile is always active. Other options are start time, end time or both.
UsageTTL
Autoexpire resource allocation after this time duration.


@@ -10,7 +10,7 @@ RouteS
=========
**RouteS** is a standalone subsystem within **CGRateS** responsible to compute a list of routes which can be used for a specific event received to process. It is accessed via `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_.
**RouteS** is a standalone subsystem within **CGRateS** responsible to compute a list of routes which can be used for a specific event received to process. It is accessed via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
As most of the other subsystems, it is performance oriented, stored inside *DataDB* but cached inside the *cgr-engine* process.
Caching can be done dynamically/on-demand or at start-time/precached and it is configurable within *cache* section in the :ref:`JSON configuration <configuration>`.
@@ -73,13 +73,13 @@ attributes_conns
Connections to AttributeS for altering events before supplier queries. If undefined, fields modifications are disabled.
resources_conns
Connections to ResourceS for *res sorting, empty to disable functionality.
Connections to ResourceS for \*res sorting, empty to disable functionality.
stats_conns
Connections to StatS for *stats sorting, empty to disable stats functionality.
Connections to StatS for \*stats sorting, empty to disable stats functionality.
default_ratio
Default ratio used in case of *load strategy
Default ratio used in case of \*load strategy
.. _SupplierProfile:
@@ -98,6 +98,9 @@ ID
FilterIDs
List of *FilterProfileIDs* which should match in order to consider the profile matching the event.
ActivationInterval
The time interval when this profile becomes active. If undefined, the profile is always active. Other options are start time, end time or both.
Sorting
Sorting strategy applied when ordering the individual *Routes* defined below. Possible values are:
@@ -156,19 +159,19 @@ FilterIDs
List of *FilterProfileIDs* which should match in order to consider the *Supplier* in use/active.
AccountIDs
List of account IDs which should be checked in case of some strategies (ie: *lc, *hc).
List of account IDs which should be checked in case of some strategies (ie: \*lc, \*hc).
RatingPlanIDs
List of RatingPlanIDs which should be checked in case of some strategies (ie: *lc, *hc).
List of RatingPlanIDs which should be checked in case of some strategies (ie: \*lc, \*hc).
ResourceIDs
List of ResourceIDs which should be checked in case of some strategies (ie: *reas or *reds).
List of ResourceIDs which should be checked in case of some strategies (ie: \*reas or \*reds).
StatIDs
List of StatIDs which should be checked in case of some strategies (ie: *qos or *load). Can also be defined as *StatID:MetricID*.
List of StatIDs which should be checked in case of some strategies (ie: \*qos or \*load). Can also be defined as *StatID:MetricID*.
Weight
Used for sorting in some strategies (ie: *weight, *lc or *hc).
Used for sorting in some strategies (ie: \*weight, \*lc or \*hc).
Blocker
No more routes are provided after this one.

docs/rpcconns.rst Normal file

@@ -0,0 +1,161 @@
.. _rpc_conns:
RPCConns
========
**RPCConns** defines connection pools used by CGRateS components for inter-service communication. These pools enable services to interact both within a single CGRateS instance and across multiple instances.
Configuration Structure
-----------------------
Example configuration in the JSON file:
.. code-block:: json
{
"rpc_conns": {
"conn1": {
"strategy": "*first",
"pool_size": 0,
"conns": [{
"address": "192.168.122.210:2012",
"transport": "*json",
"connect_attempts": 5,
"reconnects": -1,
"connect_timeout": "1s",
"reply_timeout": "2s"
}]
}
}
}
Predefined Connection Pools
---------------------------
\*internal
Direct in-process communication
\*birpc_internal
Bidirectional in-process communication
\*localhost
JSON-RPC connection to local cgr-engine on port 2012
\*bijson_localhost
Bidirectional JSON-RPC connection to local cgr-engine on port 2014
Bidirectional Communication with SessionS
-----------------------------------------
Bidirectional connections are specifically designed and used for communication between agents and the :ref:`SessionS <sessions>` component. While agents can send requests using standard connections, bidirectional connections are necessary when SessionS needs to communicate back to the agents.
When using bidirectional connections, SessionS maintains references to all connected agents, allowing it to send requests back to specific agents when needed (for example, to force disconnect a session or query active sessions).
.. note::
Bidirectional connections (``*birpc_internal``, ``*birpc_json``, ``*birpc_gob``) are exclusively used for Agent-SessionS communication. All other service interactions use standard one-way connections.
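As an illustrative sketch (the agent and subsystem names are examples, not a complete configuration), an agent references a bidirectional pool in its *sessions_conns*, while other subsystems keep standard one-way connections:

```json
{
    "freeswitch_agent": {
        "enabled": true,
        "sessions_conns": ["*birpc_internal"]
    },
    "cdrs": {
        "enabled": true,
        "rals_conns": ["*internal"]
    }
}
```

With this layout, :ref:`SessionS <sessions>` can call back into the FreeSWITCH agent (e.g. to force a disconnect), while CDRs-to-RALs traffic remains a regular one-way internal connection.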
Parameters
----------
Pool Parameters
^^^^^^^^^^^^^^^
Strategy
Controls connection selection within the pool. Possible values:
* ``*first``: Uses first available connection, fails over on network/timeout/missing service errors
* ``*next``: Round-robin between connections with same failover as ``*first``
* ``*random``: Random connection selection with same failover as ``*first``
* ``*first_positive``: Tries connections in order until getting any successful response
* ``*first_positive_async``: Async version of ``*first_positive``
* ``*broadcast``: Sends to all connections, returns first successful response
* ``*broadcast_sync``: Sends to all, waits for completion, logs errors that wouldn't trigger failover in ``*first``
* ``*broadcast_async``: Sends to all without waiting for responses
* ``*parallel``: Pool that creates and reuses connections up to a limit
.. note::
Connections attempt failover to the next available connection in the pool on connection errors, timeouts, or service errors. Service errors (usually referring to "can't find service" errors) occur when attempting to reach services that are either temporarily unavailable during engine initialization or disabled in that particular instance.
PoolSize
Sets the connection limit for ``*parallel`` strategy (0 means unlimited)
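As a concrete sketch of the pool parameters above (pool name and addresses are illustrative), a pool spreading requests round-robin across two remote engines could look like this:

```json
{
    "rpc_conns": {
        "engine_pool": {
            "strategy": "*next",
            "pool_size": 0,
            "conns": [
                {"address": "10.0.0.1:2013", "transport": "*gob", "reconnects": -1},
                {"address": "10.0.0.2:2013", "transport": "*gob", "reconnects": -1}
            ]
        }
    }
}
```

With ``*next``, requests alternate between the two engines and fail over to the other one on connection, timeout or service errors.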
Connection Parameters
^^^^^^^^^^^^^^^^^^^^^
Address
Network address, ``*internal``, or ``*birpc_internal``
Transport
Protocol (``*json``, ``*gob``, ``*birpc_json``, ``*birpc_gob``, ``*http_jsonrpc``). When using ``*internal`` or ``*birpc_internal`` addresses, defaults to the address value. Otherwise defaults to ``*gob``.
ConnectAttempts
Number of initial connection attempts
Reconnects
Max number of reconnection attempts (-1 for infinite)
MaxReconnectInterval
Maximum delay between reconnects
ConnectTimeout
Connection timeout (e.g., "1s")
ReplyTimeout
Response timeout (e.g., "2s")
TLS
Enable TLS encryption
ClientKey
Path to TLS client key file
ClientCertificate
Path to TLS client certificate
CaCertificate
Path to CA certificate
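A TLS-secured connection might be sketched as follows (the address and file paths are placeholders; the JSON key names mirror the parameters above, but verify them against the shipped default configuration for your version):

```json
{
    "rpc_conns": {
        "secure_conn": {
            "strategy": "*first",
            "conns": [{
                "address": "203.0.113.10:2023",
                "transport": "*gob",
                "tls": true,
                "client_key": "/etc/cgrates/tls/client.key",
                "client_certificate": "/etc/cgrates/tls/client.crt",
                "ca_certificate": "/etc/cgrates/tls/ca.crt"
            }]
        }
    }
}
```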
Transport Performance
---------------------
\*internal, \*birpc_internal
In-process communication (by far the fastest)
\*gob, \*birpc_gob
Binary protocol that provides better performance at the cost of being harder to troubleshoot
\*json, \*birpc_json
Standard JSON protocol - slower but easier to debug since you can read the traffic
\*http_jsonrpc
HTTP-based JSON-RPC protocol - slower than direct JSON-RPC due to HTTP overhead, but can integrate with web infrastructure and provides easy debugging through standard HTTP tools
.. note::
While the ``transport`` parameter name is used in the configuration, it actually specifies the codec (\*json, \*gob) used for data encoding. All network connections use TCP, while internal ones skip networking completely.
Using Connection Pools
----------------------
Components reference connection pools through "_conns" configuration fields:
.. code-block:: json
{
"cdrs": {
"enabled": true,
"rals_conns": ["*internal"],
"ees_conns": ["conn1"]
}
}
This configuration approach allows:
* Deploying services across single or multiple instances
* Selecting transports based on performance requirements
* Automatic failover between connections


@@ -4,7 +4,7 @@ SessionS
========
**SessionS** is a standalone subsystem within **CGRateS** responsible to manage virtual sessions based on events received. It is accessed via `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_.
**SessionS** is a standalone subsystem within **CGRateS** responsible to manage virtual sessions based on events received. It is accessed via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Parameters
@@ -341,7 +341,7 @@ Instead of arguments, the options for enabling various functionaity will come in
\*routes
Process the event with :ref:`Routes`. Auxiliary flags available:
**\*ignoreErrors**
**\*ignore_errors**
Ignore the routes with errors instead of failing the request completely.
**\*event_cost**


@@ -6,7 +6,7 @@ StatS
**StatS** is a standalone subsystem part of the **CGRateS** infrastructure, designed to aggregate and calculate statistical metrics for the generic *Events* (hashmaps) it receives.
Both receiving of *Events* as well as *Metrics* displaying is performed via a complete set of `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_.
Both receiving of *Events* as well as *Metrics* displaying is performed via a complete set of `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Due to its real-time nature, **StatS** is designed for high throughput, being able to process thousands of *Events* per second. This is possible since each *StatQueue* is a very light object, held in memory and eventually backed up in *DataDB*.
@@ -73,6 +73,9 @@ ID
FilterIDs
List of *FilterProfileIDs* which should match in order to consider the profile matching the event.
ActivationInterval
The time interval when this profile becomes active. If undefined, the profile is always active. Other options are start time, end time or both.
QueueLength
Maximum number of items stored in the queue. Once the *QueueLength* is reached, new items entering will cause oldest one to be dropped (FIFO mode).
@@ -137,7 +140,7 @@ Use cases
* Aggregate various traffic metrics for traffic transparency.
* Revenue assurance applications.
* Fraud detection by aggregating specific billing metrics during sensitive time intervals (*acc, *tcc, *tcd).
* Fraud detection by aggregating specific billing metrics during sensitive time intervals (\*acc, \*tcc, \*tcd).
* Building call patterns.
* Building statistical information to train systems capable of artificial intelligence.
* Building quality metrics used in traffic routing.


@@ -4,7 +4,7 @@ ThresholdS
==========
**ThresholdS** is a standalone subsystem within **CGRateS** responsible to execute a list of *Actions* for a specific event received to process. It is accessed via `CGRateS RPC APIs <https://godoc.org/github.com/cgrates/cgrates/apier/>`_.
**ThresholdS** is a standalone subsystem within **CGRateS** responsible to execute a list of *Actions* for a specific event received to process. It is accessed via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
As most of the other subsystems, it is performance oriented, stored inside *DataDB* but cached inside the *cgr-engine* process.
Caching can be done dynamically/on-demand or at start-time/precached and it is configurable within *cache* section in the :ref:`JSON configuration <configuration>`.
@@ -93,6 +93,9 @@ ID
FilterIDs
List of *FilterProfileIDs* which should match in order to consider the profile matching the event.
ActivationInterval
The time interval when this profile becomes active. If undefined, the profile is always active. Other options are start time, end time or both.
MaxHits
Limit number of hits for this threshold. Once this is reached, the threshold is considered disabled.
@@ -108,8 +111,8 @@ Blocker
Weight
Sorts the execution of multiple thresholds matching the event. The higher the *Weight* is, the higher the priority to be executed.
ActionProfileIDs
List of *ActionProfiles* to execute for this threshold.
ActionIDs
List of *Actions* to execute for this threshold.
Async
If true, do not wait for actions to complete.

docs/trends.rst Normal file

@@ -0,0 +1,168 @@
.. _trends:
TrendS
======
**TrendS** is a standalone subsystem, part of the **CGRateS** infrastructure, designed to work as an extension of :ref:`StatS` by regularly querying it, storing its values in a time-series-like database and calculating trend percentages based on their evolution.
Complete interaction with **TrendS** is possible via `CGRateS RPC APIs <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Due to its real-time nature, **TrendS** is designed for high throughput, being able to process thousands of queries per second. This is possible since each *Trend* is a very light object, held in memory and eventually backed up in :ref:`DataDB`.
Processing logic
----------------
In order for **TrendS** to start querying :ref:`StatS`, it needs to be *scheduled* to do so. Scheduling is done using `Cron Expressions <https://en.wikipedia.org/wiki/Cron>`_.
Once *Cron Expressions* are defined within a *TrendProfile*, the internal **Cron Scheduler** needs to be triggered. This can happen in two different ways:
Automatic Query Scheduling
The profiles to be queried are listed in the **trends** section of the :ref:`JSON configuration <configuration>`. Leaving any part of *trend_id* or *tenant* empty is interpreted as a catch-all filter.
API Scheduling
The profiles to be queried are sent as arguments to the `TrendSv1.ScheduleQueries API call <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_.
Offline storage
---------------
Offline storage is optionally available by enabling the profile's *Stored* flag and configuring the *store_interval* inside the :ref:`JSON configuration <configuration>`.
Trend querying
--------------
In order to query a **Trend** (ie: to be displayed in a web interface), one should use the `TrendSv1.GetTrend API call <https://pkg.go.dev/github.com/cgrates/cgrates/apier@master/>`_ which also offers pagination parameters.
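A minimal query sketch over the JSON-RPC interface follows; the argument names shown here are indicative and should be checked against the API reference:

```json
{
    "method": "TrendSv1.GetTrend",
    "params": [{
        "Tenant": "cgrates.org",
        "ID": "TREND_ACD"
    }],
    "id": 1
}
```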
Trend exporting
---------------
On each **Trend** change, a specially crafted event, *TrendUpdate*, can be sent to one of the following subsystems:
**ThresholdS**
Sending the **TrendUpdate** event gives the administrator the possibility to react to *Trend* changes, including the escalation strategies offered by the **ThresholdS** parameters.
Fine-tuning parameters (ie: selecting only specific ThresholdProfiles to increase speed) are available directly within the **TrendProfile**.
**EEs**
**EEs** makes it possible to export the **TrendUpdate** to all the available outside interfaces of **CGRateS**.
Both exporting options are enabled within :ref:`JSON configuration <configuration>`.
Parameters
----------
TrendS
^^^^^^
**TrendS** is the **CGRateS** component responsible for generating the **Trend** queries.
It is configured within the **trends** section of the :ref:`JSON configuration <configuration>` via the following parameters:
enabled
Will enable starting of the service. Possible values: <true|false>.
store_interval
Time interval for backing up the trends into *DataDB*. Use 0 to completely disable the functionality, -1 to enable synchronous backup; anything higher than 0 gives the interval for asynchronous backups.
stats_conns
List of connections where we will query the stats.
scheduled_ids
Limits the TrendProfiles queried. Leave empty to query all the available TrendProfiles, or specify only a tenant to query all the available profiles on that tenant.
thresholds_conns
Connection IDs towards *ThresholdS* component. If not defined, there will be no notifications sent to *ThresholdS* on *Trend* changes.
ees_conns
Connection IDs towards the *EEs* component. If left empty, no exports will be performed on *Trend* changes.
ees_exporter_ids
Limit the exporter profiles executed on *Trend* changes.
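Putting the parameters above together, a *trends* section could look like this (the connection and profile IDs are illustrative; verify the exact field layout against the default configuration shipped with your version):

```json
{
    "trends": {
        "enabled": true,
        "store_interval": "-1",
        "stats_conns": ["*internal"],
        "scheduled_ids": {
            "cgrates.org": ["TREND_ACD"]
        },
        "thresholds_conns": ["*internal"],
        "ees_conns": [],
        "ees_exporter_ids": []
    }
}
```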
TrendProfile
^^^^^^^^^^^^
A *TrendProfile* is made up of the following fields:
Tenant
The tenant on the platform (one can see the tenant as partition ID).
ID
Identifier for the *TrendProfile*, unique within a *Tenant*.
Schedule
Cron expression scheduling gathering of the metrics.
StatID
StatS identifier which will be queried.
Metrics
Limit the list of metrics from the stats instance queried.
TTL
Automatic cleanup of the queried values from inside *Trend Metrics*.
QueueLength
Limit the size of *Trend Metrics*. Older values will be removed first.
MinItems
Issue *TrendUpdate* events to external subsystems only if *MinItems* is reached, to limit false alarms.
CorrelationType
The correlation strategy to use when computing the trend. *\*average* will consider all previous query values and *\*last* only the last one.
Tolerance
Allowed deviation of the values when computing the trend, defined as a percentage of increase/decrease.
Stored
Enable storing of this *Trend* for persistence.
ThresholdIDs
Limit the *ThresholdProfiles* processing the *TrendUpdate* for this *TrendProfile*.
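To make *CorrelationType* and *Tolerance* concrete, here is a rough Python sketch of the growth/label computation (a simplified illustration of the logic described above, not the engine's actual implementation):

```python
def trend_growth(history, new_value, correlation="*last"):
    """Percentage growth of new_value vs. a reference taken from history."""
    if not history:
        return None  # not enough data; the label would be N/A
    if correlation == "*last":
        reference = history[-1]
    else:  # "*average": consider all previous query values
        reference = sum(history) / len(history)
    return (new_value - reference) / reference * 100.0


def trend_label(growth, tolerance=0.0):
    """Classify growth as *positive, *negative or *constant within tolerance."""
    if growth is None:
        return "N/A"
    if abs(growth) <= tolerance:
        return "*constant"
    return "*positive" if growth > 0 else "*negative"


# Previous ACD samples were 10.0 and 12.0; the new query returns 12.6.
growth = trend_growth([10.0, 12.0], 12.6)   # ~5% growth vs. the last value
print(trend_label(growth, tolerance=10.0))  # within the 10% tolerance -> *constant
print(trend_label(growth))                  # no tolerance -> *positive
```

A larger *Tolerance* therefore dampens noisy metrics into *\*constant*, while *\*average* smooths the reference over the whole recorded history.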
Trend
^^^^^
A *Trend* is made up of the following fields:
Tenant
The tenant on the platform (one can see the tenant as partition ID).
ID
Unique *Trend* identifier on a *Tenant*
RunTimes
Times when the stat queries were run by the scheduler.
Metrics
History of the queried metrics, indexed by the query time. One query stores the following values:
ID
Metric ID on the *StatS* side
Value
Value of the metric at the time of query
TrendGrowth
Computed trend growth for the metric values, expressed as a percentage.
TrendLabel
Computed trend label for the metric values. Possible values are: \*positive, \*negative, \*constant, N/A.
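An illustrative *Trend* object, as it might be returned by a query (the values are made up and the exact field layout should be verified against the API reference):

```json
{
    "Tenant": "cgrates.org",
    "ID": "TREND_ACD",
    "RunTimes": ["2024-01-15T10:00:00Z", "2024-01-15T11:00:00Z"],
    "Metrics": {
        "2024-01-15T10:00:00Z": {
            "*acd": {"ID": "*acd", "Value": 120.0, "TrendGrowth": -1.0, "TrendLabel": "N/A"}
        },
        "2024-01-15T11:00:00Z": {
            "*acd": {"ID": "*acd", "Value": 126.0, "TrendGrowth": 5.0, "TrendLabel": "*positive"}
        }
    }
}
```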
Use cases
---------
* Aggregate various traffic metrics for traffic transparency.
* Revenue assurance applications.
* Fraud detection by aggregating specific billing metrics during sensitive time intervals (\*acc, \*tcc, \*tcd).
* Building call patterns.
* Building statistical information to train systems capable of artificial intelligence.
* Building quality metrics used in traffic routing.

docs/troubleshooting.rst Normal file

@@ -0,0 +1,308 @@
Troubleshooting
===============
.. contents::
:local:
:depth: 3
Profiling
---------
This section covers how to set up and use profiling tools for ``cgrates`` to diagnose performance issues. You can enable profiling through configuration, runtime flags, or APIs.
For more information on profiling in Go, see the `Go Diagnostics: Profiling <https://go.dev/doc/diagnostics#profiling>`_ documentation.
Configuration
~~~~~~~~~~~~~
There are two ways to set up profiling:
Using JSON Configuration
^^^^^^^^^^^^^^^^^^^^^^^^
Enable profiling by adding the ``pprof_path`` under the ``http`` section in your JSON config file:
.. code-block:: json
{
"listen": {
"http": ":2080",
"http_tls": ":2280"
},
"http": {
"pprof_path": "/debug/pprof/"
}
}
Profiling is enabled by default and exposes the ``/debug/pprof/`` endpoint. You can access it through the address set in the ``listen`` section (``http`` or ``http_tls``). To turn off profiling, set ``pprof_path`` to an empty string ``""``.
Using Runtime Flags
^^^^^^^^^^^^^^^^^^^
You can also control profiling with runtime flags when starting ``cgr-engine``:
.. code-block:: console
$ cgr-engine -help
Usage of cgr-engine:
-cpuprof_dir string
Directory for CPU profiles
-memprof_dir string
Directory for memory profiles
-memprof_interval duration
Interval between memory profile saves (default 15s)
-memprof_maxfiles int
Number of memory profiles to keep (most recent) (default 1)
-memprof_timestamp
Add timestamp to memory profile files
Generating Profile Data
~~~~~~~~~~~~~~~~~~~~~~~
Let's assume the profiling interface is available at ``http://localhost:2080/debug/pprof/``.
.. note::
Profiling started with flags or APIs can be stopped using the corresponding API calls. If you start profiling on startup using flags and don't stop it manually, a profile will be automatically generated when the engine shuts down. The same applies if you start profiling via API and forget to stop it before shutting down the engine.
CPU Profiling
^^^^^^^^^^^^^
Here's how to generate CPU profile data:
- **Web Browser**: Go to ``http://localhost:2080/debug/pprof/`` in your browser. Click "profile" to start a 30-second CPU profile.
- **Custom Duration**: Add the ``seconds`` parameter to set a different duration: ``http://localhost:2080/debug/pprof/profile?seconds=5``.
- **Command Line**: Use ``curl`` to download the profile:
.. code-block:: console
curl -o cpu.prof http://localhost:2080/debug/pprof/profile?seconds=5
- **APIs**: Use ``CoreSv1.StartCPUProfiling`` and ``CoreSv1.StopCPUProfiling`` APIs:
.. code-block:: json
{
"method": "CoreSv1.StartCPUProfiling",
"params": [{
"DirPath": "/tmp"
}],
"id": 1
}
{
"method": "CoreSv1.StopCPUProfiling",
"params": [],
"id": 1
}
- **Startup Profiling**: Profile the entire runtime by specifying a directory with the ``-cpuprof_dir`` flag:
.. code-block:: console
cgr-engine -cpuprof_dir=/tmp [other flags]
Memory Profiling
^^^^^^^^^^^^^^^^
Generate memory profile data like this:
- **Web Browser**: Visit ``http://localhost:2080/debug/pprof/`` to create a memory snapshot. Use ``?debug=2`` (or ``?debug=1``) for human-readable output. If the ``debug`` parameter is omitted or set to ``0``, the output will be in binary format.
- **Command Line**: Use ``curl`` to download the memory profile:
.. code-block:: console
curl -o mem.prof http://localhost:2080/debug/pprof/heap
- **Automated Profiling**: Use the ``CoreSv1.StartMemoryProfiling`` API for periodic memory snapshots:
.. code-block:: json
{
"method": "CoreSv1.StartMemoryProfiling",
"params": [{
"DirPath": "/tmp",
"Interval": 5000000000,
"MaxFiles": 5,
"UseTimestamp": true
}],
"id": 1
}
.. note::
``Interval`` is in nanoseconds. Future updates will allow using time strings (e.g., ``5s``, ``1h``) or seconds as an integer.
Other Useful Profiles
^^^^^^^^^^^^^^^^^^^^^
The ``/debug/pprof/`` endpoint offers more useful profiles:
- **Goroutine Profile** (``/debug/pprof/goroutine``): View or download goroutine stack dumps.
- **Mutex Profile** (``/debug/pprof/mutex``): Find bottlenecks where goroutines wait for locks.
- **Block Profile** (``/debug/pprof/block``): Identify where goroutines block waiting on synchronization primitives.
- **Thread Create Profile** (``/debug/pprof/threadcreate``): Show stack traces that led to the creation of new OS threads.
- **Execution Trace**: For information on generating and analyzing execution traces, see the `Tracing`_ section below.
Analyzing Profiles
~~~~~~~~~~~~~~~~~~
The main tool for analyzing profiles is ``go tool pprof``. It helps visualize and analyze profiling data. You can use it with both downloaded profile files and directly with URLs.
Command-Line Analysis
^^^^^^^^^^^^^^^^^^^^^
For CPU profiles:
.. code-block:: console
go tool pprof cpu.prof
# or
go tool pprof http://localhost:2080/debug/pprof/profile
For memory profiles:
.. code-block:: console
go tool pprof mem.prof
# or
go tool pprof http://localhost:2080/debug/pprof/heap
This opens an interactive terminal. Use commands like ``top``, ``list``, ``web``, and ``svg`` to explore the profile.
.. hint::
Run ``go tool pprof -h`` for more information on available commands and options.
Visual Analysis
^^^^^^^^^^^^^^^
Create visual representations of your profiling data:
- **SVG**: Generate an SVG graph:
.. code-block:: console
go tool pprof -svg cpu.prof > cpu.svg
# or
go tool pprof -svg mem.prof > mem.svg
- **Web Interface**: Use ``-http`` for an interactive visualization in your browser:
.. code-block:: console
go tool pprof -http=:8080 cpu.prof
# or
go tool pprof -http=:8080 mem.prof
.. note::
You might need to install the ``graphviz`` package.
Tracing
-------
Execution tracing provides a detailed view of runtime behavior of your Go program.
For detailed information on tracing in Go, see the `Go Diagnostics: Execution Tracing <https://go.dev/doc/diagnostics#tracing>`_ documentation.
To generate and analyze trace data:
.. code-block:: console
# Generate trace data
curl -o trace.out http://localhost:2080/debug/pprof/trace?seconds=5
# Analyze trace data
go tool trace trace.out
This opens a browser interface for detailed execution analysis.
Debugging
---------
This section covers how to set up and use Delve, a Go debugger, with ``cgrates``.
For detailed information on debugging Go programs, see the `Go Diagnostics: Debugging <https://go.dev/doc/diagnostics#debugging>`_ documentation.
Installation
~~~~~~~~~~~~
To install Delve, run:
.. code-block:: console
go install github.com/go-delve/delve/cmd/dlv@latest
Basic Usage
~~~~~~~~~~~
There are several ways to use Delve with ``cgrates``:
1. Start ``cgr-engine`` in debug mode:
.. code-block:: console
dlv exec /path/to/cgr-engine -- --config_path=/etc/cgrates --logger=*stdout
2. Attach to a running instance:
.. code-block:: console
ENGINE_PID=$(pidof cgr-engine)
dlv attach $ENGINE_PID
3. Debug tests:
.. code-block:: console
dlv test github.com/cgrates/cgrates/apier/v1 -- -test.run=TestName
.. hint::
For better debugging, disable optimizations (``-N``) and inlining (``-l``) when building ``cgr-engine``:
.. code-block:: console
go install -gcflags="all=-N -l" -ldflags "-X 'github.com/cgrates/cgrates/utils.GitLastLog=$GIT_LAST_LOG'" github.com/cgrates/cgrates/cmd/cgr-engine
Handling Crashes
~~~~~~~~~~~~~~~~
To capture more information when ``cgrates`` crashes:
1. Enable core dump generation:
.. code-block:: console
ulimit -c unlimited
GOTRACEBACK=crash cgr-engine -config_path=/etc/cgrates
2. Analyze core dumps with Delve:
.. code-block:: console
dlv core /path/to/cgr-engine core
Common Debugging Commands
~~~~~~~~~~~~~~~~~~~~~~~~~
Once in a Delve debug session, you can use these common commands:
- ``break`` or ``b``: Set a breakpoint
- ``continue`` or ``c``: Run until breakpoint or program termination
- ``next`` or ``n``: Step over to next line
- ``step`` or ``s``: Step into function call
- ``print`` or ``p``: Evaluate an expression
- ``goroutines``: List current goroutines
- ``help``: Show help for commands
For more information on using Delve, refer to the `Delve Documentation <https://github.com/go-delve/delve/tree/master/Documentation>`_.
Further Reading
---------------
For more comprehensive information on Go diagnostics, profiling, and debugging, check out these resources:
- `Go Diagnostics <https://go.dev/doc/diagnostics>`_: Official documentation on diagnostics in Go.
- `Profiling Go Programs <https://go.dev/blog/pprof>`_: In-depth blog post on profiling in Go.
- `net/http/pprof godoc <https://pkg.go.dev/net/http/pprof>`_: Documentation for the pprof package.
- `Delve Debugger <https://github.com/go-delve/delve>`_: GitHub repository for the Delve debugger.

docs/tutorial.rst Normal file

@@ -0,0 +1,515 @@
Tutorial
========
.. contents::
:local:
:depth: 3
Introduction
------------
This tutorial provides detailed instructions for setting up a SIP Server and managing communication between the server and the CGRateS instance.
.. note::
The instructions in this tutorial were developed and tested on a Debian 11 (Bullseye) virtual machine.
Scenario Overview
-----------------
The tutorial comprises the following steps:
1. **SIP Server Setup**:
Select and install a SIP Server. The tutorial supports the following options:
- FreeSWITCH_
- Asterisk_
- Kamailio_
- OpenSIPS_
2. **CGRateS Initialization**:
Launch a CGRateS instance with the corresponding agent configured. In this context, an "agent" refers to a component within CGRateS that manages communication between CGRateS and the SIP Servers.
3. **Account Configuration**:
Establish user accounts for different request types.
4. **Balance Addition**:
Allocate suitable balances to the user accounts.
5. **Call Simulation**:
Use Zoiper_ (or any other SIP UA of your choice) to register the user accounts and simulate calls between the configured accounts, and then verify the balance updates post-calls.
6. **Fraud Detection Setup**:
Implement a fraud detection mechanism to secure and maintain the integrity of the service.
Each step is elaborated in detail as we progress through the tutorial, beginning with the SIP Server setup.
Software Installation
---------------------
*CGRateS* already has a section within this documentation regarding installation. It can be found :ref:`here<installation>`.
Regarding the SIP Servers, click on the tab corresponding to your choice and follow the steps to set it up:
.. tabs::
.. group-tab:: FreeSWITCH
For detailed information on installing FreeSWITCH_ on Debian, please refer to its official `installation wiki <https://developer.signalwire.com/freeswitch/FreeSWITCH-Explained/Installation/Linux/Debian_67240088/>`_.
Before installing FreeSWITCH_, you need to authenticate by creating a SignalWire Personal Access Token. To generate your personal token, follow the instructions in the `SignalWire official wiki on creating a personal token <https://developer.signalwire.com/freeswitch/freeswitch-explained/installation/howto-create-a-signalwire-personal-access-token_67240087/>`_.
To install FreeSWITCH_ and configure it, we have chosen the simplest method using *vanilla* packages.
.. code-block:: bash
TOKEN=YOURSIGNALWIRETOKEN # Insert your SignalWire Personal Access Token here
sudo apt-get update && sudo apt-get install -y gnupg2 wget lsb-release
sudo wget --http-user=signalwire --http-password=$TOKEN -O /usr/share/keyrings/signalwire-freeswitch-repo.gpg https://freeswitch.signalwire.com/repo/deb/debian-release/signalwire-freeswitch-repo.gpg
echo "machine freeswitch.signalwire.com login signalwire password $TOKEN" | sudo tee /etc/apt/auth.conf
sudo chmod 600 /etc/apt/auth.conf
echo "deb [signed-by=/usr/share/keyrings/signalwire-freeswitch-repo.gpg] https://freeswitch.signalwire.com/repo/deb/debian-release/ `lsb_release -sc` main" | sudo tee /etc/apt/sources.list.d/freeswitch.list
echo "deb-src [signed-by=/usr/share/keyrings/signalwire-freeswitch-repo.gpg] https://freeswitch.signalwire.com/repo/deb/debian-release/ `lsb_release -sc` main" | sudo tee -a /etc/apt/sources.list.d/freeswitch.list
# If /etc/freeswitch does not exist, the standard vanilla configuration is deployed
sudo apt-get update && sudo apt-get install -y freeswitch-meta-all
.. group-tab:: Asterisk
To install Asterisk_, follow these steps:
.. code-block:: bash
# Install the necessary dependencies
sudo apt-get install -y build-essential libasound2-dev autoconf \
openssl libssl-dev libxml2-dev \
libncurses5-dev uuid-dev sqlite3 \
libsqlite3-dev pkg-config libedit-dev \
libjansson-dev
# Download Asterisk
wget https://downloads.asterisk.org/pub/telephony/asterisk/asterisk-20-current.tar.gz -P /tmp
# Extract the downloaded archive
sudo tar -xzvf /tmp/asterisk-20-current.tar.gz -C /usr/src
# Change the working directory to the extracted Asterisk source
cd /usr/src/asterisk-20*/
# Compile and install Asterisk
sudo ./configure --with-jansson-bundled
sudo make menuselect.makeopts
sudo make
sudo make install
sudo make samples
sudo make config
sudo ldconfig
# Create the Asterisk system user
sudo adduser --quiet --system --group --disabled-password --shell /bin/false --gecos "Asterisk" asterisk
.. group-tab:: Kamailio
Kamailio_ can be installed using the commands below, as documented in the `Kamailio Debian Installation Guide <https://kamailio.org/docs/tutorials/devel/kamailio-install-guide-deb/>`_.
.. code-block:: bash
wget -O- http://deb.kamailio.org/kamailiodebkey.gpg | sudo apt-key add -
echo "deb http://deb.kamailio.org/kamailio57 bullseye main" > /etc/apt/sources.list.d/kamailio.list
sudo apt-get update
sudo apt-get install kamailio kamailio-extra-modules kamailio-json-modules
.. group-tab:: OpenSIPS
OpenSIPS_ can be installed using the following commands:
.. code-block:: bash
sudo curl https://apt.opensips.org/opensips-org.gpg -o /usr/share/keyrings/opensips-org.gpg
echo "deb [signed-by=/usr/share/keyrings/opensips-org.gpg] https://apt.opensips.org bookworm 3.4-releases" | sudo tee /etc/apt/sources.list.d/opensips.list
echo "deb [signed-by=/usr/share/keyrings/opensips-org.gpg] https://apt.opensips.org bookworm cli-nightly" | sudo tee /etc/apt/sources.list.d/opensips-cli.list
sudo apt-get update
sudo apt-get install opensips opensips-mysql-module opensips-cgrates-module opensips-cli
Configuration and initialization
--------------------------------
This section will be dedicated to configuring both the chosen SIP Server, as well as CGRateS and then get them running.
Regarding the SIP Servers, we have prepared custom configurations in advance, as well as init scripts that can be used to start the services with said configurations (and to stop/restart/check the status of the services). Alternatively, you can copy the configuration into the default folder where the server looks for it on startup, usually /etc/<software name>.
.. tabs::
.. group-tab:: FreeSWITCH
The FreeSWITCH_ setup consists of:
- *vanilla* configuration + "mod_json_cdr" for CDR generation;
- configurations for the following users (found in *etc/freeswitch/directory/default*): 1001-prepaid, 1002-postpaid, 1003-pseudoprepaid, 1004-rated, 1006-prepaid, 1007-rated;
- addition of CGRateS' own extensions before routing towards users in the dialplan (found in *etc/freeswitch/dialplan/default.xml*).
To start FreeSWITCH_ with the prepared custom configuration, run:
.. code-block:: bash
sudo /usr/share/cgrates/tutorials/fs_evsock/freeswitch/etc/init.d/freeswitch start
To verify that FreeSWITCH_ is running, run the following command:
.. code-block:: bash
sudo fs_cli -x status
.. group-tab:: Asterisk
The Asterisk_ setup consists of:
- *basic-pbx* configuration sample;
- configurations for the following users: 1001-prepaid, 1002-postpaid, 1003-pseudoprepaid, 1004-rated, 1007-rated.
To start Asterisk_ with the prepared custom configuration, run:
.. code-block:: bash
sudo /usr/share/cgrates/tutorials/asterisk_ari/asterisk/etc/init.d/asterisk start
To verify that Asterisk_ is running, run the following commands:
.. code-block:: bash
sudo asterisk -r -s /tmp/cgr_asterisk_ari/asterisk/run/asterisk.ctl
ari show status
.. group-tab:: Kamailio
The Kamailio_ setup consists of:
- default configuration with small modifications to add **CGRateS** interaction;
- for script maintainability and simplicity, we have separated the **CGRateS**-specific routes into the *kamailio-cgrates.cfg* file, which is included in the main *kamailio.cfg* via an include directive;
- configurations for the following users: 1001-prepaid, 1002-postpaid, 1003-pseudoprepaid, stored using the CGRateS AttributeS subsystem.
To start Kamailio_ with the prepared custom configuration, run:
.. code-block:: bash
sudo /usr/share/cgrates/tutorials/kamevapi/kamailio/etc/init.d/kamailio start
To verify that Kamailio_ is running, run the following command:
.. code-block:: bash
sudo kamctl moni
.. group-tab:: OpenSIPS
The OpenSIPS_ setup consists of:
- *residential* configuration;
- no user account configuration is needed, since it's enough for the accounts to be defined within CGRateS;
- for simplicity, no authentication was configured (WARNING: Not suitable for production).
- creating the database for the DRouting module, using the following command:
.. code-block:: bash
opensips-cli -x database create
After creating the database for the DRouting module, populate the tables with routing info (run the statements below inside the MySQL client):
.. code-block:: sql
insert into dr_gateways (gwid,type,address) values("gw2_1",0,"sip:127.0.0.1:5082");
insert into dr_gateways (gwid,type,address) values("gw1_1",0,"sip:127.0.0.1:5081");
insert into dr_carriers (carrierid,gwlist) values("route1","gw1_1");
insert into dr_carriers (carrierid,gwlist) values("route2","gw2_1");
To start OpenSIPS_ with the prepared custom configuration, run:
.. code-block:: bash
sudo mv /etc/opensips /etc/opensips.old
sudo cp -r /usr/share/cgrates/tutorials/osips/opensips/etc/opensips /etc
sudo systemctl restart opensips
To verify that OpenSIPS_ is running, run the following command:
.. code-block:: bash
opensips-cli -x mi uptime
Since we are using OpenSIPS_ with the DRouting module, we have to set up a SIP entity that OpenSIPS_ can forward calls to.
In this example we use SIPp, a free open-source test tool / traffic generator for the SIP protocol.
To install SIPp, use the commands below:
.. code-block:: bash
sudo apt update
sudo apt install git pkg-config dh-autoreconf ncurses-dev build-essential libssl-dev libpcap-dev libncurses5-dev libsctp-dev lksctp-tools cmake
git clone https://github.com/SIPp/sipp.git
cd sipp
git checkout v3.7.0
git submodule init
git submodule update
./build.sh --common
cmake . -DUSE_SSL=1 -DUSE_SCTP=0 -DUSE_PCAP=1 -DUSE_GSL=1
make all
sudo make install
Write a SIPp XML scenario named uas.xml (or to your liking) with the content below; this scenario will simulate calls with OpenSIPS_.
Replace "OpenSIPS_IP" in the line *<sip:OpenSIPS_IP:[local_port];transport=[transport]>* with your OpenSIPS_ IP.
.. code-block:: XML
<!-- This program is free software; you can redistribute it and/or -->
<!-- modify it under the terms of the GNU General Public License as -->
<!-- published by the Free Software Foundation; either version 2 of the -->
<!-- License, or (at your option) any later version. -->
<!-- -->
<!-- This program is distributed in the hope that it will be useful, -->
<!-- but WITHOUT ANY WARRANTY; without even the implied warranty of -->
<!-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -->
<!-- GNU General Public License for more details. -->
<!-- -->
<!-- You should have received a copy of the GNU General Public License -->
<!-- along with this program; if not, write to the -->
<!-- Free Software Foundation, Inc., -->
<!-- 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA -->
<!-- -->
<!-- Sipp default 'uas' scenario. -->
<!-- -->
<scenario name="Basic UAS responder">
<!-- By adding rrs="true" (Record Route Sets), the route sets -->
<!-- are saved and used for following messages sent. Useful to test -->
<!-- against stateful SIP proxies/B2BUAs. -->
<!-- Adding ignoresdp="true" here would ignore the SDP data: that -->
<!-- can be useful if you want to reject reINVITEs and keep the -->
<!-- media stream flowing. -->
<recv request="INVITE" crlf="true"> </recv>
<!-- The '[last_*]' keyword is replaced automatically by the -->
<!-- specified header if it was present in the last message received -->
<!-- (except if it was a retransmission). If the header was not -->
<!-- present or if no message has been received, the '[last_*]' -->
<!-- keyword is discarded, and all bytes until the end of the line -->
<!-- are also discarded. -->
<!-- -->
<!-- If the specified header was present several times in the -->
<!-- message, all occurrences are concatenated (CRLF separated) -->
<!-- to be used in place of the '[last_*]' keyword. -->
<send>
<![CDATA[

SIP/2.0 180 Ringing
[last_Via:]
[last_From:]
[last_To:];tag=[pid]SIPpTag01[call_number]
[last_Call-ID:]
[last_CSeq:]
Contact: <sip:[local_ip]:[local_port];transport=[transport]>
Content-Length: 0

]]>
</send>
<send retrans="500">
<![CDATA[

SIP/2.0 200 OK
[last_Via:]
[last_From:]
[last_To:];tag=[pid]SIPpTag01[call_number]
[last_Call-ID:]
[last_Record-Route:]
[last_CSeq:]
Contact: <sip:OpenSIPS_IP:[local_port];transport=[transport]>
Content-Type: application/sdp
Content-Length: [len]

v=0
o=user1 53655765 2353687637 IN IP[local_ip_type] [local_ip]
s=-
c=IN IP[media_ip_type] [media_ip]
t=0 0
m=audio [media_port] RTP/AVP 0
a=rtpmap:0 PCMU/8000

]]>
</send>
<recv request="ACK" optional="true" rtd="true" crlf="true"> </recv>
<recv request="BYE"> </recv>
<send>
<![CDATA[

SIP/2.0 200 OK
[last_Via:]
[last_From:]
[last_To:]
[last_Call-ID:]
[last_CSeq:]
Contact: <sip:[local_ip]:[local_port];transport=[transport]>
Content-Length: 0

]]>
</send>
<!-- Keep the call open for a while in case the 200 is lost to be -->
<!-- able to retransmit it if we receive the BYE again. -->
<timewait milliseconds="4000"/>
<!-- definition of the response time repartition table (unit is ms) -->
<ResponseTimeRepartition value="10, 20, 30, 40, 50, 100, 150, 200"/>
<!-- definition of the call length repartition table (unit is ms) -->
<CallLengthRepartition value="10, 50, 100, 500, 1000, 5000, 10000"/>
</scenario>
Run SIPp with the command below:
.. code-block:: bash
sipp -sf uas.xml -p 5082
**CGRateS** will be configured with the following subsystems enabled:
- **SessionS**: started as gateway between the SIP Server and rest of CGRateS subsystems;
- **ChargerS**: used to decide the number of billing runs for customer/supplier charging;
- **AttributeS**: used to populate extra data to requests (ie: prepaid/postpaid, passwords, paypal account, LCR profile);
- **RALs**: used to calculate costs as well as account bundle management;
- **SupplierS**: selection of suppliers for each session (in case of OpenSIPS_, it will work in tandem with their DRouting module);
- **StatS**: computing statistics in real-time regarding sessions and their charging;
- **ThresholdS**: monitoring and reacting to events coming from above subsystems;
- **EEs**: exporting rated CDRs from CGR StorDB (export path: */tmp*).
Just as with the SIP Servers, we have prepared configurations and init scripts for CGRateS. You can also manage the CGRateS service using systemctl if you prefer, or start it directly with the cgr-engine binary, like so:
.. code-block:: bash
cgr-engine -config_path=<path_to_config> -logger=*stdout
.. note::
The logger flag in the command above is optional; it is usually more convenient to check the logs in the terminal where cgr-engine was started than to check the syslog.
.. tabs::
.. group-tab:: FreeSWITCH
.. code-block:: bash
sudo /usr/share/cgrates/tutorials/fs_evsock/cgrates/etc/init.d/cgrates start
.. group-tab:: Asterisk
.. code-block:: bash
sudo /usr/share/cgrates/tutorials/asterisk_ari/cgrates/etc/init.d/cgrates start
.. group-tab:: Kamailio
.. code-block:: bash
sudo /usr/share/cgrates/tutorials/kamevapi/cgrates/etc/init.d/cgrates start
.. group-tab:: OpenSIPS
.. code-block:: bash
sudo systemctl restart opensips
.. note::
If you have chosen OpenSIPS_, CGRateS has to be started first since the dependency is reversed.
Loading **CGRateS** Tariff Plans
--------------------------------
Now that we have **CGRateS** installed and started with one of the custom configurations, we can load the prepared data out of the shared folder, containing the following rules:
- Create the necessary timings (always, asap, peak, offpeak).
- Configure 3 destinations (1002, 1003 and 10 used as a catch-all rule).
- As rating we configure the following:
- Rate id: *RT_10CNT* with connect fee of 20cents, 10cents per minute for the first 60s in 60s increments followed by 5cents per minute in 1s increments.
- Rate id: *RT_20CNT* with connect fee of 40cents, 20cents per minute for the first 60s in 60s increments, followed by 10 cents per minute charged in 1s increments.
- Rate id: *RT_40CNT* with connect fee of 80cents, 40cents per minute for the first 60s in 60s increments, followed by 20cents per minute charged in 10s increments.
- Rate id: *RT_1CNT* having no connect fee and a rate of 1 cent per minute, chargeable in 1 minute increments.
- Rate id: *RT_1CNT_PER_SEC* having no connect fee and a rate of 1 cent per second, chargeable in 1 second increments.
- Accounting part will have following configured:
- Create 3 accounts: 1001, 1002, 1003.
- 1001 and 1002 will each receive 10 units of **\*monetary** balance.
.. code-block:: bash
cgr-loader -verbose -path=/usr/share/cgrates/tariffplans/tutorial
To verify that all actions were performed successfully, we use the following *cgr-console* commands:
- Make sure all our balances were topped-up:
.. code-block:: bash
cgr-console 'accounts Tenant="cgrates.org" AccountIds=["1001"]'
cgr-console 'accounts Tenant="cgrates.org" AccountIds=["1002"]'
- Query call costs so we can see our calls will have expected costs (final cost will result as sum of *ConnectFee* and *Cost* fields):
.. code-block:: bash
cgr-console 'cost Category="call" Tenant="cgrates.org" Subject="1001" Destination="1002" AnswerTime="2014-08-04T13:00:00Z" Usage="20s"'
cgr-console 'cost Category="call" Tenant="cgrates.org" Subject="1001" Destination="1002" AnswerTime="2014-08-04T13:00:00Z" Usage="1m25s"'
cgr-console 'cost Category="call" Tenant="cgrates.org" Subject="1001" Destination="1003" AnswerTime="2014-08-04T13:00:00Z" Usage="20s"'
Test calls
----------
1001 -> 1002
~~~~~~~~~~~~
Since the user 1001 is marked as *prepaid* inside the telecom switch, calling between 1001 and 1002 should generate pre-auth and prepaid debits, which can be checked with the *accounts* command integrated within the *cgr-console* tool. Charging will be done based on time of day, as described in the tariff plan definition above.
.. note::
An important particularity to note here is the ability of the **CGRateS** SessionManager to refund units booked in advance (eg: if debit occurs every 10s and rate increments are set to 1s, the SessionManager will be smart enough to refund pre-booked credits for calls stopped in the middle of a debit interval).
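The refund behaviour described in the note above can be illustrated with a toy calculation: units are booked in advance per debit interval, and the unused part of the last interval is refunded when the call ends. The numbers below are the ones from the note (10s debit interval, 1s rate increments); the code is illustrative only, not CGRateS logic:

```python
DEBIT_INTERVAL = 10    # seconds booked in advance per debit
RATE_PER_SEC = 0.01    # assumed: 1 cent per second, 1s increments

def charge_call(duration_s):
    # book whole debit intervals in advance (ceiling division)
    booked = -(-duration_s // DEBIT_INTERVAL) * DEBIT_INTERVAL
    # refund the seconds booked but never used
    refund = (booked - duration_s) * RATE_PER_SEC
    return round(booked * RATE_PER_SEC - refund, 2)

# a 23s call books 30s (three 10s debits) and refunds 7s on hangup
print(charge_call(23))  # -> 0.23
```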
Check that the 1001 balance is properly deducted during the call. Moreover, since the general balance has priority over the shared one, debits for this call should first be taken out of the general balance.
.. code-block:: bash
cgr-console 'accounts Tenant="cgrates.org" AccountIds=["1001"]'
1002 -> 1001
~~~~~~~~~~~~
The user 1002 is marked as *postpaid* inside the telecom switch, hence his calls will be debited at the end of the call instead of during it, and his balance can go negative without influencing his new calls (no pre-auth).
To check that the debits took place, we use the console command again, this time at the end of the call rather than during it:
.. code-block:: bash
cgr-console 'accounts Tenant="cgrates.org" AccountIds=["1002"]'
1001 -> 1003
~~~~~~~~~~~~
The user 1001 calls user 1003 and after 12 seconds the call will be disconnected.
CDR Processing
--------------
- The SIP Server generates a CDR event at the end of each call (e.g., FreeSWITCH_ via HTTP POST and Kamailio_ via evapi).
- The event is directed towards the port configured inside cgrates.json due to the automatic handler registration built into the SessionS subsystem.
- The event reaches CGRateS through the SessionS subsystem in close to real-time.
- Once inside CGRateS, the event is instantly rated and ready for export.
CDR Exporting
-------------
Once the CDRs are mediated, they are available to be exported. To export them, you first need to configure your EEs exporters in the configuration (already done by the CGRateS script from earlier). Important fields to populate are *id* (sample: tutorial_export), *type* (sample: *\*file_csv*), *export_path* (sample: /tmp) and *fields*, where you define all the data that you want to export. After that, you can use the available RPC APIs or directly call export_cdrs from the console:
.. code-block:: bash
cgr-console 'export_cdrs ExporterIDs=["tutorial_export"]'
Your exported files will appear in the folder defined by *export_path* (in this case /tmp) after the command is executed.
You can list all available parameters by running ``cgr-console help export_cdrs``.
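For reference, an exporter definition in the *ees* section of the JSON configuration might look roughly like the sketch below. The *id*, *type* and *export_path* values are the samples mentioned above; the *fields* entries are illustrative assumptions, so consult the tutorial configuration shipped with CGRateS for the authoritative schema:

```json
"ees": {
    "enabled": true,
    "exporters": [
        {
            "id": "tutorial_export",
            "type": "*file_csv",
            "export_path": "/tmp",
            "fields": [
                {"tag": "CGRID", "path": "*exp.CGRID", "type": "*variable", "value": "~*req.CGRID"},
                {"tag": "Account", "path": "*exp.Account", "type": "*variable", "value": "~*req.Account"},
                {"tag": "Usage", "path": "*exp.Usage", "type": "*variable", "value": "~*req.Usage"},
                {"tag": "Cost", "path": "*exp.Cost", "type": "*variable", "value": "~*req.Cost"}
            ]
        }
    ]
}
```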
Fraud detection
---------------
We have configured some action triggers in our tariff plans: topping up more than 20 units of balance triggers a notification over syslog, and, most importantly, another trigger monitors for 100 or more units topped up, which disables the account and kills its calls if prepaid debits are used.
To verify this mechanism, simply add some random units into one account's balance:
.. code-block:: bash
cgr-console 'balance_set Tenant="cgrates.org" Account="1003" Value=23 BalanceType="*monetary" Balance={"ID":"MonetaryBalance"}'
tail -f /var/log/syslog -n 20
cgr-console 'balance_set Tenant="cgrates.org" Account="1001" Value=101 BalanceType="*monetary" Balance={"ID":"MonetaryBalance"}'
tail -f /var/log/syslog -n 20
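The trigger logic exercised above can be modeled in a few lines: one threshold produces only a notification, the higher one also disables the account. This is a toy model of the behaviour described in this section (the thresholds are from the text; the implementation is illustrative, not CGRateS code):

```python
def apply_balance_set(new_balance, log):
    """Return (balance, disabled) after evaluating the monitoring triggers."""
    disabled = False
    if new_balance > 20:
        log.append("WARNING: balance above 20 units")
    if new_balance >= 100:
        log.append("ACTION: account disabled, active prepaid calls dropped")
        disabled = True
    return new_balance, disabled

log = []
_, disabled = apply_balance_set(23, log)    # like account 1003 above
assert not disabled                          # only the notification fires
_, disabled = apply_balance_set(101, log)   # like account 1001 above
assert disabled                              # account gets disabled
print(log)
```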
On the CDRs side, we will be able to integrate CdrStats monitors as part of our Fraud Detection system (eg: an increase in the average cost for the 1001 and 1002 accounts will signal abnormalities, and we will be notified via syslog).
.. _Zoiper: https://www.zoiper.com/
.. _Asterisk: http://www.asterisk.org/
.. _FreeSWITCH: https://freeswitch.com/
.. _Kamailio: https://www.kamailio.org/w/
.. _OpenSIPS: https://opensips.org/
.. _CGRateS: http://www.cgrates.org/