Version: 4.3

❓ Frequently Asked Questions

Business

What are the advantages of Smart Monitor compared to Splunk?

Smart Monitor enables cross-system search across various storage systems, including external ones, without the need for data re-indexing. Unlike Splunk, Smart Monitor can be legally purchased within the Russian Federation.


What technical support options are available for Smart Monitor?

There are two technical support options for Smart Monitor: basic and extended.


Are there training courses available for working with Smart Monitor?

Currently, there are two training programs: Smart Monitor Developer and Smart Monitor Admin. For more details on the course content and scheduling, please visit the official website.


Is it possible to build a business offering services based on Smart Monitor?

Yes, Smart Monitor supports the MSSP (Managed Security Service Provider) model. Partners can build a service-based business using this model.


How can I learn more about the system in detail?

You can request a system demonstration by following this link.


Is there a place to explore Smart Monitor use cases?

Smart Monitor use cases are available via this link.


Where can I review the partner program?

Access to the program is granted on the portal after signing a partnership agreement.


Is there an open community on social media?

Yes, we have a Telegram group; join us!


Architecture

Is the system scalable?

Yes, the system can be deployed on any number of nodes of varying capacity, organized into a single cluster, which provides both horizontal and vertical scalability.


How does the system behave when some nodes fail?

The degree of fault tolerance is determined by the system configuration: the number of nodes in each role and the data replication settings.


How is Smart Monitor different from OpenSearch / Elasticsearch?

Smart Monitor is a universal platform for collecting and analyzing machine data, designed to solve problems in the field of information security, IT infrastructure monitoring, and business process analysis.

A distinctive feature of the platform is its ability to use various storage backends, such as OpenSearch, Elasticsearch, Hadoop, ClickHouse, and others. OpenSearch is just one of the possible storage options and serves as the platform's base storage.


Is Smart Monitor an on-premise or cloud solution?

The system is installed and runs on servers within your own IT infrastructure (on-premise). Cloud deployment can be implemented as part of project work.


Can Smart Monitor be installed on virtual infrastructure?

Yes, the installation process is the same as on physical servers.


Can Smart Monitor be installed on Astra Linux?

Yes. A full list of supported operating systems is available here.

How to install additional dependencies (libraries) for a Python script?

Additional dependencies (libraries) for a Python script are installed on all nodes of the SM Data Storage and SM Master Node clusters where the node.attr.node_with_sme: true parameter is enabled. Note that the server configuration typically specifies the path to the system Python interpreter, so the system-wide libraries are used.
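Because scripts run under the system interpreter, a library missing on even one node_with_sme node will break execution there. A sketch of a defensive check a script can start with (the helper name is ours, not part of the product):

```python
# Defensive check at the top of an SME-RE script: fail with a clear message
# if a required library is not installed on this node.
import importlib.util
import sys

def require(module_name: str) -> None:
    """Exit with an actionable error if module_name is not installed."""
    if importlib.util.find_spec(module_name) is None:
        sys.exit(f"Missing dependency '{module_name}': install it for the "
                 f"system interpreter on every node_with_sme node, "
                 f"e.g. python3 -m pip install {module_name}")

require("json")  # stdlib module, always present; check third-party names the same way
```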


Where should scripts be located for SME-RE to function correctly?

Scripts must be located in the directory specified by the cluster setting parameter sme.core.remote_script.base_path (default: "/app/opensearch/utils/scripts/"). This path corresponds to the script setting in the source configuration. However, you can specify any other directory for storing scripts, keeping in mind that:

  • The specified directory must be readable and executable by SME-RE.

  • When changing the path, ensure that all OpenSearch nodes running SME-RE have access to this directory.

Details on the verification GET request are available in the linked section.


Is it possible to place additional files required by the script in the script directory path?

Yes, you can place additional files in the directory specified by the cluster setting parameter sme.core.remote_script.base_path. However, you must ensure the script has the appropriate access rights to these files.
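The script can then locate those companion files relative to its own location rather than the working directory. A minimal sketch (lookup.csv is a hypothetical example file name):

```python
# Resolve files shipped alongside the script in base_path, independent of
# the working directory the interpreter happens to start in.
from pathlib import Path

# Fall back to the CWD only when run interactively (no __file__ available).
BASE = Path(__file__).resolve().parent if "__file__" in globals() else Path.cwd()

def data_file(name: str) -> Path:
    """Return the absolute path of a file placed next to the script."""
    return BASE / name

lookup_path = data_file("lookup.csv")  # hypothetical companion file
```

Remember that the user running SME-RE must have read access to any such file.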


Data Collection

What data can be collected in SM?

Any machine data.


What collection methods can be used to receive events from the source?

Data can be collected using a variety of protocols and connectors. Frequently used ones include:

  • Agent-based collection, including:
    • Reading from files
    • Reading Windows logs
    • Reading Linux Audit logs
    • Remote script execution
  • Syslog / UDP
  • SNMP
  • HTTP
  • JDBC
  • Kafka

Is it mandatory to install an agent to collect data?

No, data can be sent to the collector host, for example, using Syslog, or collected by the collector itself by polling the source.
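As a sketch of the agentless path, the Python standard library can emit Syslog/UDP events straight to a collector (the collector address below is a placeholder):

```python
# Agentless sending: ship events to the collector host via Syslog/UDP.
import logging
import logging.handlers

def make_syslog_logger(host: str, port: int = 514) -> logging.Logger:
    """Build a logger whose records are sent to a Syslog/UDP collector."""
    logger = logging.getLogger("smart-monitor-feed")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(host, port))  # UDP by default
    handler.setFormatter(logging.Formatter("myapp: %(message)s"))
    logger.addHandler(handler)
    return logger

log = make_syslog_logger("127.0.0.1")  # replace with the collector's address
log.info("user=alice action=login result=ok")
```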


Can the Smart Beat agent run under a non-privileged account?

Yes, if it has sufficient privileges to collect the required information.


Is it possible to centrally install and manage agents for collecting data from hosts regardless of the OS?

Centralized agent installation on hosts can be performed using automation tools such as Ansible or AD policies. Centralized agent configuration management is carried out by the Smart Beat agent manager.


How is data collected, filtered, and sent to the repository from files?

Reading and filtering events from files can be performed either on the agent side with sending to the collector, or on the collector itself, from which the data is transferred to the repository.


Can SM collect and analyze server performance metrics?

Yes, performance metrics are a typical data source, and analytics can be performed using the query language.


Can ICS components be connected as sources? Are there correlation rule templates for ICS?

Yes, data can be collected from technological segment objects with or without agents. Correlation rules for ICS are not formalized into a separate solution module, but they can be developed as part of a project.


How to verify that SME-RE is configured and working correctly?

To ensure that SME-RE is configured and working correctly, check the following on all OpenSearch nodes where script execution is planned:

  • In the opensearch.yml file, the parameter node.attr.node_with_sme: true is set

  • The SME-RE service is started and running: systemctl status sme-re

  • Interpreters and script directories are configured correctly, and file and folder access permissions are set properly

  • The configuration file /app/opensearch/utils/sme-re/application.properties is present and correctly configured

On OpenSearch nodes where script execution is not required, the node.attr.node_with_sme parameter should either be absent or set to false.
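Put together, the node-local part of this checklist corresponds to a single line in opensearch.yml:

```yaml
# opensearch.yml on a node that should execute SME-RE scripts:
node.attr.node_with_sme: true

# On nodes that should not run scripts, omit the line or set:
# node.attr.node_with_sme: false
```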

On the OpenSearch cluster itself, ensure that:

  • In the OpenSearch Dev Tools console, the command GET _cluster/settings includes the sme.core section. If it is absent, it needs to be added

  • To verify nodes with SME-RE, execute the command GET _cat/nodeattrs and look for the node_with_sme parameter

Details are available in the linked section.


Is there a convenient way to transfer necessary files (scripts and databases) to Smart Monitor?

Currently, there is no such feature. As an alternative, you can mount a shared directory on all servers and specify it in the settings. Note that the OpenSearch user must have access to this directory.


Security

Is SAML supported?

Yes, the security module includes SAML support.


Is it possible to mask sensitive data when collecting it from the source?

Yes, masking can be performed using data processing tools on the agent or collector side.
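For illustration, the kind of transformation such a masking step applies can be sketched in a few lines of Python (the patterns and replacement tags are examples, not the product's built-in rules):

```python
# Illustrative masking of sensitive substrings before an event leaves the source.
import re

CARD_RE = re.compile(r"\b\d{13,16}\b")            # bare card-like numbers
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # simple e-mail pattern

def mask(event: str) -> str:
    """Replace sensitive substrings with fixed tags."""
    event = CARD_RE.sub("****MASKED-PAN****", event)
    event = EMAIL_RE.sub("****MASKED-EMAIL****", event)
    return event

print(mask("payment by alice@example.com card 4111111111111111 accepted"))
```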


Integration with External Systems

For searching data, does it have to be stored in the Smart Monitor base repository?

No, if the data is already in an external repository, there is no need to duplicate it in Smart Monitor, as the Search Anywhere technology can be used.


Which data repositories are compatible with Smart Monitor?

Full translation of search queries written in the Smart Monitor Language, taking full advantage of the storage engine's capabilities, is available for OpenSearch, Elasticsearch, and ClickHouse. Partial translation is supported for any database connected via JDBC.


Is it possible to send alerts to external systems?

Yes. You can use the Email Action for email notifications or the Webhook Action for any API integration, such as sending SMS messages, Telegram notifications, or to other systems.
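As a sketch of the Webhook Action path, this is the kind of JSON body you could point at the Telegram Bot API's sendMessage method (the chat ID is a placeholder; how alert fields are templated depends on your action configuration):

```python
# Build a sendMessage payload for the Telegram Bot API from alert fields.
import json

def telegram_payload(chat_id: str, alert_name: str, severity: str) -> str:
    body = {
        "chat_id": chat_id,
        "text": f"Smart Monitor alert: {alert_name} (severity: {severity})",
    }
    return json.dumps(body)

# POST this body to https://api.telegram.org/bot<TOKEN>/sendMessage
print(telegram_payload("-1001234567890", "Brute-force detected", "high"))
```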


Is it possible to access external systems with APIs or databases in search queries?

Yes, using the Search Anywhere mechanism or the script command.


Search Queries

How to perform exact match filtering in a query?

To perform exact match filtering, you must consider the field type, which for text values can be text or keyword. The text type is intended for full-text search, while keyword is for exact matches. With the default mapping for text fields (a text field with a keyword sub-field), append the .keyword suffix when referencing the field for exact matching (e.g., my_field.keyword). You can learn more about constructing search queries in this section.
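Since the base storage is OpenSearch, this distinction maps onto the underlying query DSL: a term query on the keyword sub-field for exact matches versus a match query on the analyzed text field for full-text search (my_field is an example name):

```python
# OpenSearch query DSL bodies illustrating exact vs full-text matching.
import json

# Exact match: term query against the keyword sub-field.
exact_match = {"query": {"term": {"my_field.keyword": {"value": "ERROR: disk full"}}}}

# Full-text search: match query against the analyzed text field.
full_text = {"query": {"match": {"my_field": "ERROR: disk full"}}}

print(json.dumps(exact_match))
```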


Is it possible to compare values against multiple conditions simultaneously in a search query?

Yes, by using the logical operators OR and AND. When comparing a large number of values, you can use the format command, which generates a logical expression from a list of values.


How to pass data from a Smart Monitor search query to a called script?

Data obtained before the script command is called can be passed as follows:

  • If the script command is used in the request body and you need to pass the results obtained from previous commands, the script can access the path to the temporary file with results via the variable sys.argv[0].
Example:
source .smos_incident-ml-*  
| eval vect_fields="company,fields.user,fields.perpetrator,fields.position,fields.double_alerts_name",
target="target_status",
target_status = case(
status == "cancel", 1,
status == "close_incident", 2,
status == "double", 3,
status == "false", 4,
status == "improvements", 5,
status == "in_work_incident", 6,
status == "init", 7,
status == "pending", 8,
status == "sendtoism", 9
)
| fillnull company, vect_fields, target_status, target, fields.user, fields.perpetrator, fields.position, fields.double_alerts_name
| fields company, vect_fields, target_status, target, fields.user, fields.perpetrator, fields.position, fields.double_alerts_name, status
| script intr=python3 target_encoding.py
  • If you need to specify parameters when calling a script in a query, only static parameters are supported (not variables).
Example:
source users-*  
| script intr=python3 send_kafka.py ip-address-kafka.org 9092 topic_users
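A minimal sketch of what such a script can look like on the Python side, following the argv convention described above (the assumption that the temp file is CSV-like is ours and may differ in practice):

```python
# Skeleton for a script invoked via `| script intr=python3 my_script.py ...`.
import csv
import sys

def main(argv: list[str]) -> int:
    results_path = argv[0]   # path to the temp file with the preceding results
    params = argv[1:]        # static call-site parameters, if any
    with open(results_path, newline="") as fh:
        rows = list(csv.reader(fh))  # assuming a CSV-like temp file
    print(f"received {len(rows)} rows, params={params}")
    return 0

# SME-RE is described above as exposing the temp-file path via sys.argv[0],
# so the script would be entered as: main(sys.argv)
```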

Incident Management

Is it possible to manually create incidents?

Yes, incidents can be created manually. Details are available in the article here.


Is it possible to change the values of main fields, such as criticality or assignee, for an existing incident?

Yes, the values of the main fields can be changed for any existing incident. Details are available in the article here.


Is bulk editing of incidents supported?

Yes, it is supported. You can select multiple incidents and perform bulk editing or use the incident aggregation mechanism and work with a group of incidents as a single entity.


Correlation Rules

Can I develop my own correlation rules?

Yes, you can use the Smart Monitor Language to perform various search queries on data from connected sources. Once you have formulated a search query that includes the necessary logic, you can independently create a correlation rule based on it.


Are there pre-installed correlation rules in Smart Monitor?

Correlation rules are provided as part of separate content modules.


How often are correlation rules updated in content modules?

Quarterly, with the release of a new version of Smart Monitor.


Is it possible to transfer developed correlation rules to another installation?

Yes, rules can be exported and imported to another installation via the web interface.


Is it possible to work with the job scheduler via the API?

Yes, the scheduler API allows you to perform basic operations on tasks, such as create, read, update, and delete. Details are available in the article here.


Inventory

What information can be used in the inventory?

Any existing index with data can be used as a source of information.


Is it possible to automate the collection of asset information in the inventory?

Yes, the Inventory module operates on data from the sources specified in the asset configuration. Therefore, changing or adding a new entry in the source will automatically be reflected in the asset inventory.