`ClientConnector` is no longer defined within `rpcclient`; in the
latest version it is obtained from the `cgrates/birpc` library
instead.
Replaced the `net/rpc` library with `cgrates/birpc`, and
`net/rpc/jsonrpc` with `cgrates/birpc/jsonrpc`.
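In practice the swap is just the import paths; for illustration (a
compilable no-op, since the diff itself is not shown here):

```go
package main

// Illustrative only: the replacement import paths named above.
import (
	_ "github.com/cgrates/birpc"         // replaces "net/rpc"
	_ "github.com/cgrates/birpc/jsonrpc" // replaces "net/rpc/jsonrpc"
)

func main() {}
```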
The implementations of `CallBiRPC()` and `Handlers()` were removed,
along with the methods associated with them.
The `rpcclient.BiRPCConector` and the methods prefixed with `BiRPC`
were removed from the `BiRPClient` interface.
The `BiRPClient` interface was renamed to `BIRPCClient`. It is not
clear whether the interface is still needed; for now, it seems useful
as a compile-time check that the structure is correct.
`rpcclient.BiRPCConector` has been replaced with `context.ClientConnector`,
which is now carried alongside `context.Context` within the same struct
(`cgrates/birpc/context.Context`). Consequently, functions that
previously relied on it now receive the context instead (see the
sketch after the list below). The changes were made in the following
functions:
- `engine/connmanager.go` - `*ConnManager.Call`
- `engine/connmanager.go` - `*ConnManager.getConn`
- `engine/connmanager.go` - `*ConnManager.getConnWithConfig`
- `engine/libengine.go` - `NewRPCPool`
- `engine/libengine.go` - `NewRPCConnection`
- `agents/libagents.go` - `processRequest`
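A minimal sketch of the new shape (the `ctx.Client` field is referenced
later in this message; the `ConnManager.Call` signature and the method
constant are assumptions for illustration):

```go
package agents

import (
	"github.com/cgrates/birpc/context"
	"github.com/cgrates/cgrates/engine"
	"github.com/cgrates/cgrates/utils"
)

// processRequestSketch: the bidirectional client now travels inside the
// context instead of being passed as a separate rpcclient.BiRPCConector
// parameter, so only ctx is forwarded to the ConnManager.
func processRequestSketch(ctx *context.Context, connMgr *engine.ConnManager,
	connIDs []string, ev *utils.CGREvent) error {
	var reply string
	return connMgr.Call(ctx, connIDs, utils.SessionSv1ProcessEvent, ev, &reply)
}
```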
Compilation errors related to the `rpcclient.NewRPCClient` function were
resolved by adding the missing `context`, `max_reconnect_interval`, and
`delayFunc` parameters. Additionally, context was added to all calls made
by the client. An effort was made to avoid passing hardcoded values as
much as possible, and extra flags were added where necessary for cgr
binaries.
The `max_reconnect_interval` parameter is now passed from parent
functions, which required adjustments to the function signature.
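The commit doesn't spell out the new parameters, but a `delayFunc`
honoring a maximum reconnect interval could look roughly like this (a
self-contained hypothetical sketch, not the actual cgrates helper):

```go
package main

import (
	"fmt"
	"time"
)

// fibDelay returns a closure producing Fibonacci-growing delays capped
// at max, the kind of backoff a delayFunc/max_reconnect_interval pair
// suggests.
func fibDelay(unit, max time.Duration) func() time.Duration {
	a, b := 0, 1
	return func() time.Duration {
		a, b = b, a+b
		if d := time.Duration(a) * unit; max <= 0 || d < max {
			return d
		}
		return max
	}
}

func main() {
	delay := fibDelay(time.Second, 5*time.Second)
	for i := 0; i < 6; i++ {
		fmt.Println(delay()) // 1s 1s 2s 3s 5s 5s (capped)
	}
}
```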
A new context field was added to all agent objects to ensure access to
it before sending it to the connmanager's Call, effectively replacing
`birpcclient`. An alternative would have been to create the new
service and attach it to the context right before passing it to the
handlers, but the chosen approach is more convenient.
With the addition of a context field for the SIP server agents, an
additional error, coming from the creation of the service, needed to
be handled. Agent constructors within the services package log errors
as they occur and return. Alternative solutions considered were either
shutting down the engine instead of returning, or logging the
occurrence as a warning when `ctx.Client` isn't required, i.e. in
cases where bidirectional connections are not needed. For the latter
option, it is crucial to either return the object together with the
error, or to set the error to nil immediately after logging it.
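A sketch of that latter alternative, with all names hypothetical (the
real constructors live in the services package):

```go
package services

import (
	"errors"

	"github.com/cgrates/cgrates/utils"
)

type agentSketch struct{ hasBiRPC bool }

// attachBiRPCService stands in for the real birpc service creation step.
func attachBiRPCService(a *agentSketch) error {
	return errors.New("service creation failed") // simulate the error path
}

// newAgentSketch logs the creation error as a warning and clears it when
// the bidirectional client isn't required, so a usable (non-nil) agent
// is still returned.
func newAgentSketch() (agent *agentSketch, err error) {
	agent = new(agentSketch)
	if err = attachBiRPCService(agent); err != nil {
		utils.Logger.Warning("<agentSketch> " + err.Error())
		err = nil // ctx.Client not needed; keep the agent usable
	}
	return
}
```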
Context has been integrated into all internal Call implementations to
ensure the objects conform to the `birpc.ClientConnector` interface.
These implementations will be removed in the near future, as all
service objects are being wrapped in a `birpc.Service` type that
satisfies the `birpc.ClientConnector` interface. For now, they are
retained as a reference in case any unexpected issues arise.
Ensured that the `birpc.Service` wrapped service objects are passed to
the internal channel getters rather than the objects themselves.
For all `*ConnManager.Call` function calls, `context.TODO()` has been
added. This will be replaced with the context passed to the method when
it becomes available.
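The pattern is visible in the cgr-engine source below, e.g. in the
signal handler:

```go
var reply string
if err := config.CgrConfig().V1ReloadConfig(
	context.TODO(), // stand-in until the caller's context is threaded through
	&config.ReloadArgs{
		Section: utils.EmptyString,
		Path:    config.CgrConfig().ConfigPath,
	}, &reply); err != nil {
	utils.Logger.Warning(
		fmt.Sprintf("Error reloading configuration: <%s>", err))
}
```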
The value returned by `StringGetOpts` is now passed directly to the
`FirstNonEmpty` function, instead of being assigned to a variable
first.
The implementation of the `*AnalyzerService.GetInternalBiRPCCodec`
function has been removed from the services package. Additionally,
the AnalyzerBiRPCConnector type definition and its associated methods
have been removed.
The codec implementation has been revised to include the following
changes:
- `rpc.ServerCodec` -> `birpc.ServerCodec`;
- `rpc2.ServerCodec` -> `birpc.BirpcCodec`;
- `rpc2.Request` -> `birpc.Request`;
- `rpc2.Response` -> `birpc.Response`;
- The constructors for json and gob birpc codecs in `cenkalti/rpc2`
have been replaced with ones from the `birpc/jsonrpc` library;
- The gob codec implementation has been removed in favor of the
version already implemented in the birpc external library.
The server implementation has been updated with the following changes:
- A field that represents a simple RPC server has been added to the
Server struct;
- Both the simple and bidirectional RPC servers are now initialized
inside the Server constructor, eliminating the need for nil checks;
- Usage of `net/rpc` and `cenkalti/rpc2` has been replaced with
`cgrates/birpc`;
- Additional `(Bi)RPCUnregisterName` methods have been added;
- The implementations for (bi)json/gob servers have been somewhat
simplified.
Before deleting the Call functions and using the `birpc.NewService`
method to register the methods for all cgrates components, the Call
functions were updated to satisfy the `birpc.ClientConnector`
interface; this makes the transition a bit safer. For SessionS, this
had to be done regardless.
The `BiRPCCall` method has been removed from coreutils.go. The
`RPCCall` and `APIerRPCCall` methods are also to be removed in the
future.
Ensured that all methods for `SessionSv1` and `SessionS` have the
correct function signature with context. The same adjustments were made
for the session dispatcher methods and for the `SessionSv1Interface`.
Also removed sessionsbirpc.go and smgbirpc.go files.
Implemented the following methods to help with the registration of
methods across all subsystems:
- `NewServiceWithName`;
- `NewDispatcherService` for all dispatcher methods;
- `NewService` for the remaining methods that are already named
correctly.
Compared to the constructor from the external library, these also make
sure that the naming of the methods is consistent with our constants.
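The resulting registration pattern can be seen in the cgr-engine
source below, for instance:

```go
srv, err := engine.NewService(v1.NewGuardianSv1())
if err != nil {
	return err
}
if !cfg.DispatcherSCfg().Enabled {
	for _, s := range srv {
		server.RpcRegister(s)
	}
}
internalGuardianSChan <- anz.GetInternalCodec(srv, utils.GuardianS)
```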
Added context to the Call methods for the mock client connectors (used
in tests).
Removed unused rpc fields from inside the following services:
- EeS
- LoaderS
- ResourceS
- RouteS
- StatS
- ThresholdS
- SessionS
- CoreS
Updated the methods implementing the logic for API methods to align
with the latest changes, ensuring consistency and correctness. The
modifications include:
- Adjusting the function signature to the new format
(ctx, args, reply), as sketched after the subsystem list below.
- Prefixing names with 'V*' to indicate that they are utilized by
or registered as APIs.
- Containing the complete logic within the methods, enabling APIs
to call them and return their reply directly.
The subsystems affected by these changes are detailed as follows:
- CoreS: Additional methods were implemented utilizing the
existing ones. Though modifying them directly was possible, certain
methods (e.g., StopCPUProfiling()) were used elsewhere, and not as
RPC requests.
- CDRs: Renamed V1CountCDRs to V1GetCDRsCount.
- StatS: V1GetQueueFloatMetrics, V1GetQueueStringMetrics,
V1GetStatQueue accept different arguments compared to API functions
(opted to register StatSv1 instead).
- ResourceS: Renamed V1ResourcesForEvent to V1GetResourcesForEvent
to align with API naming.
- DispatcherS: Renamed V1GetProfilesForEvent to
DispatcherSv1GetProfilesForEvent.
- For the rest, adding context to the function signature was enough.
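A minimal sketch of the new method shape, using the Ping convention
(the receiver and argument types here are stand-ins for illustration):

```go
package v1

import (
	"github.com/cgrates/birpc/context"
	"github.com/cgrates/cgrates/utils"
)

type ThresholdSv1 struct{}

// Ping illustrates the (ctx, args, reply) format described above.
func (tSv1 *ThresholdSv1) Ping(ctx *context.Context, ign *utils.CGREvent, reply *string) error {
	*reply = utils.Pong
	return nil
}
```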
In the unit tests, objects are now wrapped in a `birpc.Service` before
being passed to the internal connections map under the corresponding
key.
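A sketch of that test setup (`birpc.NewService`'s exact signature is an
assumption here; the ConnManager map shape matches the one used in
cgr-engine below):

```go
srv, err := birpc.NewService(engine.NewCacheS(cfg, dm, nil)) // wrap the object
if err != nil {
	t.Fatal(err)
}
chanClnt := make(chan birpc.ClientConnector, 1)
chanClnt <- srv // pass the wrapped *birpc.Service, not the raw object
connMgr := engine.NewConnManager(cfg, map[string]chan birpc.ClientConnector{
	utils.ConcatenatedKey(utils.MetaInternal, utils.MetaCaches): chanClnt,
})
// connMgr is then handed to the component under test.
```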
Some tests covering error cases were also checking the other return
value besides the error. That check has been removed, since it is
redundant.
Revised the RPC/BiRPC clients' constructors (for use in tests).
A different approach has been chosen for the handling of ping methods
within subsystems. Instead of defining the same structure in every file,
the ping methods were added inside the Service constructor function.
Though the existing Ping methods were left as they were, they will be
removed in the future.
An additional method has been implemented to register the Ping method
from outside of the engine package.
Implemented Sleep and CapsError methods for SessionS (previously, I
believe, they were available exclusively for bidirectional use).
A specific issue has been fixed within the CapsError SessionSv1 API
implementation, which is designed to overwrite methods that cannot be
allocated due to the threshold limit being reached. Previously, it was
deallocating when writing the response, even when a spot hadn't been
allocated in the first place (due to the cap being hit). The reason
behind this, and especially why the test was passing before, still
needs to be looked into, as the problem should have been present
before as well.
Implemented the `*SessionSv1.RegisterInternalBiJSONConn` method in apier.
All agent methods have been registered under the SessionSv1 name. For
the correct method names, the leading "V1" prefix has been trimmed
using the `birpc.NewServiceWithMethodsRename` function.
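Roughly as follows (the exact signature of
`birpc.NewServiceWithMethodsRename` is assumed, and `agentV1` is a
placeholder for the registered agent object):

```go
srv, err := birpc.NewServiceWithMethodsRename(agentV1, utils.SessionSv1, true,
	func(oldFn string) string {
		return strings.TrimPrefix(oldFn, "V1") // e.g. V1DisconnectSession -> DisconnectSession
	})
```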
Revised the RegisterRpcParams function to populate the parameters
while relying on the `*birpc.Service` type instead. This also
automatically takes care of the validation. At the moment, any error
encountered is logged without being returned; this might change in
the future.
Inside the cgrRPCAction function, `mapstructure.Decode`'s output parameter
is now guaranteed to always be a pointer.
Updated go.mod and go.sum.
Fixed some typos.
/*
Real-time Online/Offline Charging System (OCS) for Telecom & ISP environments
Copyright (C) ITsysCOM GmbH

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
*/

package main

import (
	"flag"
	"fmt"
	"io"
	"log"
	"os"
	"os/signal"
	"path"
	"runtime"
	"runtime/pprof"
	"strconv"
	"strings"
	"sync"
	"syscall"
	"time"

	"github.com/cgrates/birpc"
	"github.com/cgrates/birpc/context"
	"github.com/cgrates/cgrates/cores"
	"github.com/cgrates/cgrates/loaders"
	"github.com/cgrates/cgrates/registrarc"

	v1 "github.com/cgrates/cgrates/apier/v1"
	"github.com/cgrates/cgrates/config"
	"github.com/cgrates/cgrates/engine"
	"github.com/cgrates/cgrates/services"
	"github.com/cgrates/cgrates/servmanager"
	"github.com/cgrates/cgrates/utils"
	"github.com/cgrates/rpcclient"
)

var (
	cgrEngineFlags    = flag.NewFlagSet(utils.CgrEngine, flag.ContinueOnError)
	cfgPath           = cgrEngineFlags.String(utils.CfgPathCgr, utils.ConfigPath, "Configuration directory path.")
	version           = cgrEngineFlags.Bool(utils.VersionCgr, false, "Prints the application version.")
	checkConfig       = cgrEngineFlags.Bool(utils.CheckCfgCgr, false, "Verify the config without starting the engine")
	pidFile           = cgrEngineFlags.String(utils.PidCgr, utils.EmptyString, "Write pid file")
	httpPprofPath     = cgrEngineFlags.String(utils.HttpPrfPthCgr, utils.EmptyString, "http address used for program profiling")
	cpuProfDir        = cgrEngineFlags.String(utils.CpuProfDirCgr, utils.EmptyString, "write cpu profile to files")
	memProfDir        = cgrEngineFlags.String(utils.MemProfDirCgr, utils.EmptyString, "write memory profile to file")
	memProfInterval   = cgrEngineFlags.Duration(utils.MemProfIntervalCgr, 5*time.Second, "Time between memory profile saves")
	memProfNrFiles    = cgrEngineFlags.Int(utils.MemProfNrFilesCgr, 1, "Number of memory profile to write")
	scheduledShutdown = cgrEngineFlags.String(utils.ScheduledShutdownCgr, utils.EmptyString, "shutdown the engine after this duration")
	singlecpu         = cgrEngineFlags.Bool(utils.SingleCpuCgr, false, "Run on single CPU core")
	syslogger         = cgrEngineFlags.String(utils.LoggerCfg, utils.EmptyString, "logger <*syslog|*stdout>")
	nodeID            = cgrEngineFlags.String(utils.NodeIDCfg, utils.EmptyString, "The node ID of the engine")
	logLevel          = cgrEngineFlags.Int(utils.LogLevelCfg, -1, "Log level (0-emergency to 7-debug)")
	preload           = cgrEngineFlags.String(utils.PreloadCgr, utils.EmptyString, "LoaderIDs used to load the data before the engine starts")

	cfg *config.CGRConfig
)

// startFilterService fires up the FilterS
func startFilterService(filterSChan chan *engine.FilterS, cacheS *engine.CacheS, connMgr *engine.ConnManager, cfg *config.CGRConfig,
	dm *engine.DataManager) {
	<-cacheS.GetPrecacheChannel(utils.CacheFilters)
	filterSChan <- engine.NewFilterS(cfg, connMgr, dm)
}

// initCacheS inits the CacheS and starts precaching as well as populating internal channel for RPC conns
func initCacheS(internalCacheSChan chan birpc.ClientConnector,
	server *cores.Server, dm *engine.DataManager, shdChan *utils.SyncedChan,
	anz *services.AnalyzerService,
	cpS *engine.CapsStats) (*engine.CacheS, error) {
	chS := engine.NewCacheS(cfg, dm, cpS)
	go func() {
		if err := chS.Precache(); err != nil {
			utils.Logger.Crit(fmt.Sprintf("<%s> could not init, error: %s", utils.CacheS, err.Error()))
			shdChan.CloseOnce()
		}
	}()

	srv, err := engine.NewService(chS)
	if err != nil {
		return nil, err
	}
	if !cfg.DispatcherSCfg().Enabled {
		for _, s := range srv {
			server.RpcRegister(s)
		}
	}
	internalCacheSChan <- anz.GetInternalCodec(srv, utils.CacheS)
	return chS, nil
}

func initGuardianSv1(internalGuardianSChan chan birpc.ClientConnector, server *cores.Server,
	anz *services.AnalyzerService) error {
	srv, err := engine.NewService(v1.NewGuardianSv1())
	if err != nil {
		return err
	}
	if !cfg.DispatcherSCfg().Enabled {
		for _, s := range srv {
			server.RpcRegister(s)
		}
	}
	internalGuardianSChan <- anz.GetInternalCodec(srv, utils.GuardianS)
	return nil
}

func initServiceManagerV1(internalServiceManagerChan chan birpc.ClientConnector,
	srvMngr *servmanager.ServiceManager, server *cores.Server,
	anz *services.AnalyzerService) error {
	srv, err := engine.NewService(v1.NewServiceManagerV1(srvMngr))
	if err != nil {
		return err
	}
	if !cfg.DispatcherSCfg().Enabled {
		for _, s := range srv {
			server.RpcRegister(s)
		}
	}
	internalServiceManagerChan <- anz.GetInternalCodec(srv, utils.ServiceManager)
	return nil
}

func initConfigSv1(internalConfigChan chan birpc.ClientConnector,
	server *cores.Server, anz *services.AnalyzerService) error {
	srv, err := engine.NewService(v1.NewConfigSv1(cfg))
	if err != nil {
		return err
	}
	if !cfg.DispatcherSCfg().Enabled {
		for _, s := range srv {
			server.RpcRegister(s)
		}
	}
	internalConfigChan <- anz.GetInternalCodec(srv, utils.ConfigSv1)
	return nil
}

func startRPC(server *cores.Server, internalRaterChan,
	internalCdrSChan, internalRsChan, internalStatSChan,
	internalAttrSChan, internalChargerSChan, internalThdSChan, internalSuplSChan,
	internalSMGChan, internalAnalyzerSChan, internalDispatcherSChan,
	internalLoaderSChan, internalRALsv1Chan, internalCacheSChan,
	internalEEsChan chan birpc.ClientConnector,
	shdChan *utils.SyncedChan) {
	if !cfg.DispatcherSCfg().Enabled {
		select { // Any of the rpc methods will unlock listening to rpc requests
		case resp := <-internalRaterChan:
			internalRaterChan <- resp
		case cdrs := <-internalCdrSChan:
			internalCdrSChan <- cdrs
		case smg := <-internalSMGChan:
			internalSMGChan <- smg
		case rls := <-internalRsChan:
			internalRsChan <- rls
		case statS := <-internalStatSChan:
			internalStatSChan <- statS
		case attrS := <-internalAttrSChan:
			internalAttrSChan <- attrS
		case chrgS := <-internalChargerSChan:
			internalChargerSChan <- chrgS
		case thS := <-internalThdSChan:
			internalThdSChan <- thS
		case splS := <-internalSuplSChan:
			internalSuplSChan <- splS
		case analyzerS := <-internalAnalyzerSChan:
			internalAnalyzerSChan <- analyzerS
		case loaderS := <-internalLoaderSChan:
			internalLoaderSChan <- loaderS
		case ralS := <-internalRALsv1Chan:
			internalRALsv1Chan <- ralS
		case chS := <-internalCacheSChan: // added in order to start the RPC before precaching is done
			internalCacheSChan <- chS
		case eeS := <-internalEEsChan:
			internalEEsChan <- eeS
		case <-shdChan.Done():
			return
		}
	} else {
		select {
		case dispatcherS := <-internalDispatcherSChan:
			internalDispatcherSChan <- dispatcherS
		case <-shdChan.Done():
			return
		}
	}

	go server.ServeJSON(cfg.ListenCfg().RPCJSONListen, shdChan)
	go server.ServeGOB(cfg.ListenCfg().RPCGOBListen, shdChan)
	go server.ServeHTTP(
		cfg.ListenCfg().HTTPListen,
		cfg.HTTPCfg().HTTPJsonRPCURL,
		cfg.HTTPCfg().HTTPWSURL,
		cfg.HTTPCfg().HTTPUseBasicAuth,
		cfg.HTTPCfg().HTTPAuthUsers,
		shdChan,
	)
	if (len(cfg.ListenCfg().RPCGOBTLSListen) != 0 ||
		len(cfg.ListenCfg().RPCJSONTLSListen) != 0 ||
		len(cfg.ListenCfg().HTTPTLSListen) != 0) &&
		(len(cfg.TLSCfg().ServerCerificate) == 0 ||
			len(cfg.TLSCfg().ServerKey) == 0) {
		utils.Logger.Warning("WARNING: missing TLS certificate/key file!")
		return
	}
	if cfg.ListenCfg().RPCGOBTLSListen != utils.EmptyString {
		go server.ServeGOBTLS(
			cfg.ListenCfg().RPCGOBTLSListen,
			cfg.TLSCfg().ServerCerificate,
			cfg.TLSCfg().ServerKey,
			cfg.TLSCfg().CaCertificate,
			cfg.TLSCfg().ServerPolicy,
			cfg.TLSCfg().ServerName,
			shdChan,
		)
	}
	if cfg.ListenCfg().RPCJSONTLSListen != utils.EmptyString {
		go server.ServeJSONTLS(
			cfg.ListenCfg().RPCJSONTLSListen,
			cfg.TLSCfg().ServerCerificate,
			cfg.TLSCfg().ServerKey,
			cfg.TLSCfg().CaCertificate,
			cfg.TLSCfg().ServerPolicy,
			cfg.TLSCfg().ServerName,
			shdChan,
		)
	}
	if cfg.ListenCfg().HTTPTLSListen != utils.EmptyString {
		go server.ServeHTTPTLS(
			cfg.ListenCfg().HTTPTLSListen,
			cfg.TLSCfg().ServerCerificate,
			cfg.TLSCfg().ServerKey,
			cfg.TLSCfg().CaCertificate,
			cfg.TLSCfg().ServerPolicy,
			cfg.TLSCfg().ServerName,
			cfg.HTTPCfg().HTTPJsonRPCURL,
			cfg.HTTPCfg().HTTPWSURL,
			cfg.HTTPCfg().HTTPUseBasicAuth,
			cfg.HTTPCfg().HTTPAuthUsers,
			shdChan,
		)
	}
}

func writePid() {
	utils.Logger.Info(*pidFile)
	f, err := os.Create(*pidFile)
	if err != nil {
		log.Fatal("Could not write pid file: ", err)
	}
	f.WriteString(strconv.Itoa(os.Getpid()))
	if err := f.Close(); err != nil {
		log.Fatal("Could not write pid file: ", err)
	}
}

func singnalHandler(shdWg *sync.WaitGroup, shdChan *utils.SyncedChan) {
	shutdownSignal := make(chan os.Signal, 1)
	reloadSignal := make(chan os.Signal, 1)
	signal.Notify(shutdownSignal, os.Interrupt,
		syscall.SIGTERM, syscall.SIGINT, syscall.SIGQUIT)
	signal.Notify(reloadSignal, syscall.SIGHUP)
	for {
		select {
		case <-shdChan.Done():
			shdWg.Done()
			return
		case <-shutdownSignal:
			shdChan.CloseOnce()
			shdWg.Done()
			return
		case <-reloadSignal:
			// do it in it's own gorutine in order to not block the signal handler with the reload functionality
			go func() {
				var reply string
				if err := config.CgrConfig().V1ReloadConfig(
					context.TODO(),
					&config.ReloadArgs{
						Section: utils.EmptyString,
						Path:    config.CgrConfig().ConfigPath, // use the same path
					}, &reply); err != nil {
					utils.Logger.Warning(
						fmt.Sprintf("Error reloading configuration: <%s>", err))
				}
			}()
		}
	}
}

func runPreload(loader *services.LoaderService, internalLoaderSChan chan birpc.ClientConnector,
	shdChan *utils.SyncedChan) {
	if !cfg.LoaderCfg().Enabled() {
		utils.Logger.Err(fmt.Sprintf("<%s> not enabled but required by preload mechanism", utils.LoaderS))
		shdChan.CloseOnce()
		return
	}

	ldrs := <-internalLoaderSChan
	internalLoaderSChan <- ldrs

	var reply string
	for _, loaderID := range strings.Split(*preload, utils.FieldsSep) {
		if err := loader.GetLoaderS().V1Load(context.TODO(),
			&loaders.ArgsProcessFolder{
				ForceLock:   true, // force lock will unlock the file in case is locked and return error
				LoaderID:    loaderID,
				StopOnError: true,
			}, &reply); err != nil {
			utils.Logger.Err(fmt.Sprintf("<%s> preload failed on loadID <%s> , err: <%s>", utils.LoaderS, loaderID, err.Error()))
			shdChan.CloseOnce()
			return
		}
	}
}

func main() {
	if err := cgrEngineFlags.Parse(os.Args[1:]); err != nil {
		log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
	}
	vers, err := utils.GetCGRVersion()
	if err != nil {
		log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
	}
	goVers := runtime.Version()
	if *version {
		fmt.Println(vers)
		return
	}
	if *pidFile != utils.EmptyString {
		writePid()
	}
	if *singlecpu {
		runtime.GOMAXPROCS(1) // Having multiple cpus may slow down computing due to CPU management, to be reviewed in future Go releases
	}

	shdWg := new(sync.WaitGroup)
	shdChan := utils.NewSyncedChan()

	shdWg.Add(1)
	go singnalHandler(shdWg, shdChan)

	var cS *cores.CoreService
	var stopMemProf chan struct{}
	var memPrfDirForCores string
	if *memProfDir != utils.EmptyString {
		shdWg.Add(1)
		stopMemProf = make(chan struct{})
		memPrfDirForCores = *memProfDir
		go cores.MemProfiling(*memProfDir, *memProfInterval, *memProfNrFiles, shdWg, stopMemProf, shdChan)
		defer func() {
			if cS == nil {
				close(stopMemProf)
			}
		}()
	}

	var cpuProfileFile io.Closer
	if *cpuProfDir != utils.EmptyString {
		cpuPath := path.Join(*cpuProfDir, utils.CpuPathCgr)
		cpuProfileFile, err = cores.StartCPUProfiling(cpuPath)
		if err != nil {
			log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
		}
		defer func() {
			if cS != nil {
				cS.StopCPUProfiling()
				return
			}
			if cpuProfileFile != nil {
				pprof.StopCPUProfile()
				cpuProfileFile.Close()
			}
		}()
	}

	if *scheduledShutdown != utils.EmptyString {
		shutdownDur, err := utils.ParseDurationWithNanosecs(*scheduledShutdown)
		if err != nil {
			log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
		}
		shdWg.Add(1)
		go func() { // Schedule shutdown
			tm := time.NewTimer(shutdownDur)
			select {
			case <-tm.C:
				shdChan.CloseOnce()
			case <-shdChan.Done():
				tm.Stop()
			}
			shdWg.Done()
		}()
	}

	// Init config
	cfg, err = config.NewCGRConfigFromPath(*cfgPath)
	if err != nil {
		log.Fatalf("Could not parse config: <%s>", err.Error())
		return
	}
	if *checkConfig {
		if err := cfg.CheckConfigSanity(); err != nil {
			log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
		}
		return
	}

	if *nodeID != utils.EmptyString {
		cfg.GeneralCfg().NodeID = *nodeID
	}

	config.SetCgrConfig(cfg) // Share the config object

	// init syslog
	if utils.Logger, err = utils.Newlogger(utils.FirstNonEmpty(*syslogger,
		cfg.GeneralCfg().Logger), cfg.GeneralCfg().NodeID); err != nil {
		log.Fatalf("Could not initialize syslog connection, err: <%s>", err.Error())
		return
	}
	lgLevel := cfg.GeneralCfg().LogLevel
	if *logLevel != -1 { // Modify the log level if provided by command arguments
		lgLevel = *logLevel
	}
	utils.Logger.SetLogLevel(lgLevel)
	// init the concurrentRequests
	cncReqsLimit := cfg.CoreSCfg().Caps
	if utils.ConcurrentReqsLimit != 0 { // used as shared variable
		cncReqsLimit = utils.ConcurrentReqsLimit
	}
	cncReqsStrategy := cfg.CoreSCfg().CapsStrategy
	if len(utils.ConcurrentReqsStrategy) != 0 {
		cncReqsStrategy = utils.ConcurrentReqsStrategy
	}
	caps := engine.NewCaps(cncReqsLimit, cncReqsStrategy)
	utils.Logger.Info(fmt.Sprintf("<CoreS> starting version <%s><%s>", vers, goVers))
	cfg.LazySanityCheck()

	// init the channel here because we need to pass them to connManager
	internalServeManagerChan := make(chan birpc.ClientConnector, 1)
	internalConfigChan := make(chan birpc.ClientConnector, 1)
	internalCoreSv1Chan := make(chan birpc.ClientConnector, 1)
	internalCacheSChan := make(chan birpc.ClientConnector, 1)
	internalGuardianSChan := make(chan birpc.ClientConnector, 1)
	internalAnalyzerSChan := make(chan birpc.ClientConnector, 1)
	internalCDRServerChan := make(chan birpc.ClientConnector, 1)
	internalAttributeSChan := make(chan birpc.ClientConnector, 1)
	internalDispatcherSChan := make(chan birpc.ClientConnector, 1)
	internalSessionSChan := make(chan birpc.ClientConnector, 1)
	internalChargerSChan := make(chan birpc.ClientConnector, 1)
	internalThresholdSChan := make(chan birpc.ClientConnector, 1)
	internalStatSChan := make(chan birpc.ClientConnector, 1)
	internalResourceSChan := make(chan birpc.ClientConnector, 1)
	internalRouteSChan := make(chan birpc.ClientConnector, 1)
	internalSchedulerSChan := make(chan birpc.ClientConnector, 1)
	internalRALsChan := make(chan birpc.ClientConnector, 1)
	internalResponderChan := make(chan birpc.ClientConnector, 1)
	internalAPIerSv1Chan := make(chan birpc.ClientConnector, 1)
	internalAPIerSv2Chan := make(chan birpc.ClientConnector, 1)
	internalLoaderSChan := make(chan birpc.ClientConnector, 1)
	internalEEsChan := make(chan birpc.ClientConnector, 1)

	// initialize the connManager before creating the DMService
	// because we need to pass the connection to it
	connManager := engine.NewConnManager(cfg, map[string]chan birpc.ClientConnector{
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaAnalyzer):       internalAnalyzerSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaApier):          internalAPIerSv2Chan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaAttributes):     internalAttributeSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaCaches):         internalCacheSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaCDRs):           internalCDRServerChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaChargers):       internalChargerSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaGuardian):       internalGuardianSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaLoaders):        internalLoaderSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaResources):      internalResourceSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaResponder):      internalResponderChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaScheduler):      internalSchedulerSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaSessionS):       internalSessionSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaStats):          internalStatSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaRoutes):         internalRouteSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaThresholds):     internalThresholdSChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaServiceManager): internalServeManagerChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaConfig):         internalConfigChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaCore):           internalCoreSv1Chan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaRALs):           internalRALsChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaEEs):            internalEEsChan,
		utils.ConcatenatedKey(utils.MetaInternal, utils.MetaDispatchers):    internalDispatcherSChan,

		utils.ConcatenatedKey(rpcclient.BiRPCInternal, utils.MetaSessionS): internalSessionSChan,
	})
	srvDep := map[string]*sync.WaitGroup{
		utils.AnalyzerS:       new(sync.WaitGroup),
		utils.APIerSv1:        new(sync.WaitGroup),
		utils.APIerSv2:        new(sync.WaitGroup),
		utils.AsteriskAgent:   new(sync.WaitGroup),
		utils.AttributeS:      new(sync.WaitGroup),
		utils.CDRServer:       new(sync.WaitGroup),
		utils.ChargerS:        new(sync.WaitGroup),
		utils.CoreS:           new(sync.WaitGroup),
		utils.DataDB:          new(sync.WaitGroup),
		utils.DiameterAgent:   new(sync.WaitGroup),
		utils.RegistrarC:      new(sync.WaitGroup),
		utils.DispatcherS:     new(sync.WaitGroup),
		utils.DNSAgent:        new(sync.WaitGroup),
		utils.EEs:             new(sync.WaitGroup),
		utils.ERs:             new(sync.WaitGroup),
		utils.FreeSWITCHAgent: new(sync.WaitGroup),
		utils.GlobalVarS:      new(sync.WaitGroup),
		utils.HTTPAgent:       new(sync.WaitGroup),
		utils.KamailioAgent:   new(sync.WaitGroup),
		utils.LoaderS:         new(sync.WaitGroup),
		utils.RadiusAgent:     new(sync.WaitGroup),
		utils.RALService:      new(sync.WaitGroup),
		utils.ResourceS:       new(sync.WaitGroup),
		utils.ResponderS:      new(sync.WaitGroup),
		utils.RouteS:          new(sync.WaitGroup),
		utils.SchedulerS:      new(sync.WaitGroup),
		utils.SessionS:        new(sync.WaitGroup),
		utils.SIPAgent:        new(sync.WaitGroup),
		utils.StatS:           new(sync.WaitGroup),
		utils.StorDB:          new(sync.WaitGroup),
		utils.ThresholdS:      new(sync.WaitGroup),
		utils.AccountS:        new(sync.WaitGroup),
	}
	gvService := services.NewGlobalVarS(cfg, srvDep)
	shdWg.Add(1)
	if err = gvService.Start(); err != nil {
		log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
	}
	dmService := services.NewDataDBService(cfg, connManager, srvDep)
	if dmService.ShouldRun() { // Some services can run without db, ie: ERs
		shdWg.Add(1)
		if err = dmService.Start(); err != nil {
			log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
		}
	}

	storDBService := services.NewStorDBService(cfg, srvDep)
	if storDBService.ShouldRun() { // Some services can run without db, ie: ERs
		shdWg.Add(1)
		if err = storDBService.Start(); err != nil {
			log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
		}
	}

	// Rpc/http server
	server := cores.NewServer(caps)
	if len(cfg.HTTPCfg().RegistrarSURL) != 0 {
		server.RegisterHttpFunc(cfg.HTTPCfg().RegistrarSURL, registrarc.Registrar)
	}
	if cfg.ConfigSCfg().Enabled {
		server.RegisterHttpFunc(cfg.ConfigSCfg().URL, config.HandlerConfigS)
	}
	if *httpPprofPath != utils.EmptyString {
		server.RegisterProfiler(*httpPprofPath)
	}

	// Define internal connections via channels
	filterSChan := make(chan *engine.FilterS, 1)

	// init AnalyzerS
	anz := services.NewAnalyzerService(cfg, server, filterSChan, shdChan, internalAnalyzerSChan, srvDep)
	if anz.ShouldRun() {
		shdWg.Add(1)
		if err := anz.Start(); err != nil {
			log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
		}
	}

	// init CoreSv1

	coreS := services.NewCoreService(cfg, caps, server, internalCoreSv1Chan, anz, cpuProfileFile, memPrfDirForCores, shdWg, stopMemProf, shdChan, srvDep)
	shdWg.Add(1)
	if err := coreS.Start(); err != nil {
		log.Fatalf("<%s> error received: <%s>, exiting!", utils.InitS, err.Error())
	}
	cS = coreS.GetCoreS()

	// init CacheS
	cacheS, err := initCacheS(internalCacheSChan, server, dmService.GetDM(), shdChan, anz, coreS.GetCoreS().CapsStats)
	if err != nil {
		log.Fatal(err)
	}
	engine.Cache = cacheS

	// init GuardianSv1
	err = initGuardianSv1(internalGuardianSChan, server, anz)
	if err != nil {
		log.Fatal(err)
	}

	// Start ServiceManager
	srvManager := servmanager.NewServiceManager(cfg, shdChan, shdWg, connManager)
	attrS := services.NewAttributeService(cfg, dmService, cacheS, filterSChan, server, internalAttributeSChan, anz, srvDep)
	dspS := services.NewDispatcherService(cfg, dmService, cacheS, filterSChan, server, internalDispatcherSChan, connManager, anz, srvDep)
	dspH := services.NewRegistrarCService(cfg, server, connManager, anz, srvDep)
	chrS := services.NewChargerService(cfg, dmService, cacheS, filterSChan, server,
		internalChargerSChan, connManager, anz, srvDep)
	tS := services.NewThresholdService(cfg, dmService, cacheS, filterSChan, server, internalThresholdSChan, anz, srvDep)
	stS := services.NewStatService(cfg, dmService, cacheS, filterSChan, server,
		internalStatSChan, connManager, anz, srvDep)
	reS := services.NewResourceService(cfg, dmService, cacheS, filterSChan, server,
		internalResourceSChan, connManager, anz, srvDep)
	routeS := services.NewRouteService(cfg, dmService, cacheS, filterSChan, server,
		internalRouteSChan, connManager, anz, srvDep)

	schS := services.NewSchedulerService(cfg, dmService, cacheS, filterSChan,
		server, internalSchedulerSChan, connManager, anz, srvDep)

	rals := services.NewRalService(cfg, cacheS, server,
		internalRALsChan, internalResponderChan,
		shdChan, connManager, anz, srvDep, filterSChan)

	apiSv1 := services.NewAPIerSv1Service(cfg, dmService, storDBService, filterSChan, server, schS, rals.GetResponder(),
		internalAPIerSv1Chan, connManager, anz, srvDep)

	apiSv2 := services.NewAPIerSv2Service(apiSv1, cfg, server, internalAPIerSv2Chan, anz, srvDep)

	cdrS := services.NewCDRServer(cfg, dmService, storDBService, filterSChan, server, internalCDRServerChan,
		connManager, anz, srvDep)

	smg := services.NewSessionService(cfg, dmService, server, internalSessionSChan, shdChan, connManager, anz, srvDep)

	ldrs := services.NewLoaderService(cfg, dmService, filterSChan, server,
		internalLoaderSChan, connManager, anz, srvDep)

	srvManager.AddServices(gvService, attrS, chrS, tS, stS, reS, routeS, schS, rals,
		apiSv1, apiSv2, cdrS, smg, coreS,
		services.NewEventReaderService(cfg, filterSChan, shdChan, connManager, srvDep),
		services.NewDNSAgent(cfg, filterSChan, shdChan, connManager, srvDep),
		services.NewFreeswitchAgent(cfg, shdChan, connManager, srvDep),
		services.NewKamailioAgent(cfg, shdChan, connManager, srvDep),
		services.NewAsteriskAgent(cfg, shdChan, connManager, srvDep),              // partial reload
		services.NewRadiusAgent(cfg, filterSChan, shdChan, connManager, srvDep),   // partial reload
		services.NewDiameterAgent(cfg, filterSChan, shdChan, connManager, srvDep), // partial reload
		services.NewHTTPAgent(cfg, filterSChan, server, connManager, srvDep),      // no reload
		ldrs, anz, dspS, dspH, dmService, storDBService,
		services.NewEventExporterService(cfg, filterSChan,
			connManager, server, internalEEsChan, anz, srvDep),
		services.NewSIPAgent(cfg, filterSChan, shdChan, connManager, srvDep),
	)
	srvManager.StartServices()
	// Start FilterS
	go startFilterService(filterSChan, cacheS, connManager,
		cfg, dmService.GetDM())

	err = initServiceManagerV1(internalServeManagerChan, srvManager, server, anz)
	if err != nil {
		log.Fatal(err)
	}

	// init internalRPCSet to share internal connections among the engine
	engine.IntRPC = engine.NewRPCClientSet()
	engine.IntRPC.AddInternalRPCClient(utils.AnalyzerSv1, internalAnalyzerSChan)
	engine.IntRPC.AddInternalRPCClient(utils.APIerSv1, internalAPIerSv1Chan)
	engine.IntRPC.AddInternalRPCClient(utils.APIerSv2, internalAPIerSv2Chan)
	engine.IntRPC.AddInternalRPCClient(utils.AttributeSv1, internalAttributeSChan)
	engine.IntRPC.AddInternalRPCClient(utils.CacheSv1, internalCacheSChan)
	engine.IntRPC.AddInternalRPCClient(utils.CDRsV1, internalCDRServerChan)
	engine.IntRPC.AddInternalRPCClient(utils.CDRsV2, internalCDRServerChan)
	engine.IntRPC.AddInternalRPCClient(utils.ChargerSv1, internalChargerSChan)
	engine.IntRPC.AddInternalRPCClient(utils.GuardianSv1, internalGuardianSChan)
	engine.IntRPC.AddInternalRPCClient(utils.LoaderSv1, internalLoaderSChan)
	engine.IntRPC.AddInternalRPCClient(utils.ResourceSv1, internalResourceSChan)
	engine.IntRPC.AddInternalRPCClient(utils.Responder, internalResponderChan)
	engine.IntRPC.AddInternalRPCClient(utils.SchedulerSv1, internalSchedulerSChan)
	engine.IntRPC.AddInternalRPCClient(utils.SessionSv1, internalSessionSChan)
	engine.IntRPC.AddInternalRPCClient(utils.StatSv1, internalStatSChan)
	engine.IntRPC.AddInternalRPCClient(utils.RouteSv1, internalRouteSChan)
	engine.IntRPC.AddInternalRPCClient(utils.ThresholdSv1, internalThresholdSChan)
	engine.IntRPC.AddInternalRPCClient(utils.ServiceManagerV1, internalServeManagerChan)
	engine.IntRPC.AddInternalRPCClient(utils.ConfigSv1, internalConfigChan)
	engine.IntRPC.AddInternalRPCClient(utils.CoreSv1, internalCoreSv1Chan)
	engine.IntRPC.AddInternalRPCClient(utils.RALsV1, internalRALsChan)
	engine.IntRPC.AddInternalRPCClient(utils.EeSv1, internalEEsChan)
	engine.IntRPC.AddInternalRPCClient(utils.DispatcherSv1, internalDispatcherSChan)

	err = initConfigSv1(internalConfigChan, server, anz)
	if err != nil {
		log.Fatal(err)
	}

	if *preload != utils.EmptyString {
		runPreload(ldrs, internalLoaderSChan, shdChan)
	}

	// Serve rpc connections
	go startRPC(server, internalResponderChan, internalCDRServerChan,
		internalResourceSChan, internalStatSChan,
		internalAttributeSChan, internalChargerSChan, internalThresholdSChan,
		internalRouteSChan, internalSessionSChan, internalAnalyzerSChan,
		internalDispatcherSChan, internalLoaderSChan, internalRALsChan,
		internalCacheSChan, internalEEsChan, shdChan)

	<-shdChan.Done()
	shtdDone := make(chan struct{})
	go func() {
		shdWg.Wait()
		close(shtdDone)
	}()
	select {
	case <-shtdDone:
	case <-time.After(cfg.CoreSCfg().ShutdownTimeout):
		utils.Logger.Err(fmt.Sprintf("<%s> Failed to shutdown all subsystems in the given time",
			utils.ServiceManager))
	}

	if *memProfDir != utils.EmptyString { // write last memory profiling
		cores.MemProfFile(path.Join(*memProfDir, utils.MemProfFileCgr))
	}
	if *pidFile != utils.EmptyString {
		if err := os.Remove(*pidFile); err != nil {
			utils.Logger.Warning("Could not remove pid file: " + err.Error())
		}
	}
	utils.Logger.Info("<CoreS> stopped all components. CGRateS shutdown!")
}