Updating Smart Monitor
This instruction describes the process of updating Smart Monitor from version 4.3.* to 5.0.*.
Information
Designations used in this guide:
- SM_INSTALLER - directory where the Smart Monitor version 5.0 installation package is unpacked
- USER - system user with administrator rights, usually admin
- OPENSEARCH_HOME - OpenSearch home directory, usually /app/opensearch/
- OPENSEARCH_DATA - directory where indexed data is stored, usually /app/data/
- OPENSEARCH_IP - IP address of one of the OpenSearch cluster servers
- OSD_HOME - OpenSearch Dashboards home directory, usually /app/opensearch-dashboards/
- PATH_SSL - location of the certificate, the admin private key, and ca-cert; usually the same as /app/opensearch/config/
The primary step for updating is determining the currently installed version of Smart Monitor. This can be done by viewing module versions on the main page or executing a command in the command line:
curl https://$OPENSEARCH_IP:9200/_cat/plugins -k -u $USER
After entering this command, you will need to enter the password for the $USER account. It is recommended to use the admin user.
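The _cat/plugins output lists one row per node and plugin. A small helper can reduce it to the unique plugin versions; this is only a sketch, and the sample rows fed to it below are illustrative rather than real output:

```shell
# Reduce `_cat/plugins` output (node, plugin, version) to unique
# "plugin version" pairs. Pipe the real curl output through this
# instead of the illustrative printf below.
plugin_versions() {
  awk '{ print $2, $3 }' | sort -u
}

# Illustrative input; in practice use:
#   curl -s https://$OPENSEARCH_IP:9200/_cat/plugins -k -u $USER | plugin_versions
printf 'smos-node-00 sm-core 4.3.1\nsmos-node-00 sm-uba 4.3.1\n' | plugin_versions
```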
A detailed list of new features can be viewed in the article What's New in Smart Monitor 5.0.
Let's consider the procedure for updating each component. The 5.0 installer needs to be unpacked into a directory, for example, /app/distr/.
Before starting work, it is strongly recommended to back up the main configuration files and Security settings.
Recommended actions
It is recommended to create a directory, for example, /app/backup, where you should save:
- the config directory, usually $OPENSEARCH_HOME/config or $OSD_HOME/config
- the systemd files, usually /etc/systemd/system/opensearch.service, /etc/systemd/system/opensearch-dashboards.service, and /etc/systemd/system/sme-re.service
- the file /etc/sysctl.d/00-opensearch.conf
- a copy of the Security settings; this needs to be done once and requires the certificate and private key of the admin user (the commands below create a directory named with the current date containing the OpenSearch security settings)

chmod +x $OPENSEARCH_HOME/plugins/opensearch-security/tools/securityadmin.sh
JAVA_HOME=$OPENSEARCH_HOME/jdk/ $OPENSEARCH_HOME/plugins/opensearch-security/tools/securityadmin.sh -backup /app/backup/security_$(date +%Y%m%d) \
-icl \
-nhnv \
-cacert $OPENSEARCH_HOME/config/ca-cert.pem \
-cert $OPENSEARCH_HOME/config/admin-cert.pem \
-key $OPENSEARCH_HOME/config/admin-key.pem
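The copy steps above can be sketched as a small shell helper. The paths in the usage comment follow the designations at the top of this guide; any path missing on a given node (for example, sme-re.service is not present everywhere) is simply skipped:

```shell
# backup_configs <backup_dir> <path>...
# Copies each given path into the backup directory, skipping paths
# that do not exist on this node.
backup_configs() {
  dest=$1; shift
  mkdir -p "$dest"
  for p in "$@"; do
    if [ -e "$p" ]; then
      cp -r "$p" "$dest/"
    else
      echo "skipped missing: $p"
    fi
  done
}

# Usage on a node (adjust paths to your installation):
#   backup_configs /app/backup \
#     "$OPENSEARCH_HOME/config" \
#     /etc/systemd/system/opensearch.service \
#     /etc/systemd/system/opensearch-dashboards.service \
#     /etc/systemd/system/sme-re.service \
#     /etc/sysctl.d/00-opensearch.conf
```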
Disabling Inventory Processor
If the Inventory module is not installed, proceed to the next step.
Starting from version 5.0, the functionality of Inventory Processor is included in the Inventory module. It is recommended to make a backup copy of the Inventory Processor module and disable it in crontab.
Disabling Inventory Processor must be performed before carrying out the main update. These actions need to be performed once.
Inventory Processor usually runs as a single instance on the first Smart Monitor Data Storage node with long-term data storage (routing mode cold), scheduled via crond. You can view the list of crond jobs with the command below.
crontab -l
Comment out the execution of Inventory Processor and save the changes in crond.
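A minimal sketch of commenting out the job non-interactively. The pattern inventory_processor is an assumed script name here; adjust it to match the entry actually shown by crontab -l:

```shell
# Prepend '#' to any non-comment crontab line that mentions the
# Inventory Processor. Already-commented lines are left unchanged.
comment_inventory_jobs() {
  sed 's|^\([^#].*inventory_processor.*\)$|#\1|'
}

# Review the result first, then apply it:
#   crontab -l | comment_inventory_jobs | crontab -
```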
Updating OpenSearch
The Smart Monitor 5.0 installer needs to be unpacked into a directory, for example, /app/distr/. The directory where you unpack the archive contents is referred to below as $SM_INSTALLER.
SM_INSTALLER=/app/distr/sm_5.0
For clusters consisting of multiple nodes, it is recommended to disable allocation before updating through the developer console (Navigation Menu - System Settings - Developer Console) by executing the command:
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "none"
}
}
You can do the same thing from the terminal with the following command:
curl -XPUT -k -u admin "https://$OPENSEARCH_IP:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{"persistent":{"cluster.routing.allocation.enable": "none"}}'
If you disable allocation manually in this way, do not also use the update script's allocation disabling. After updating all cluster nodes, re-enable allocation:
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "all"
}
}
You can do the same thing from the terminal with the following command:
curl -XPUT -k -u admin "https://$OPENSEARCH_IP:9200/_cluster/settings?pretty" -H "Content-Type: application/json" -d '{"persistent":{"cluster.routing.allocation.enable": "all"}}'
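The two curl calls for disabling and enabling allocation differ only in the value, so they can be wrapped in a small helper. This is a sketch assuming $OPENSEARCH_IP is set; curl prompts for the admin password:

```shell
# set_allocation none|all - toggle shard allocation on the cluster.
set_allocation() {
  curl -XPUT -k -u admin "https://$OPENSEARCH_IP:9200/_cluster/settings?pretty" \
    -H "Content-Type: application/json" \
    -d "{\"persistent\":{\"cluster.routing.allocation.enable\": \"$1\"}}"
}

# Usage:
#   set_allocation none   # before updating the nodes
#   set_allocation all    # after all nodes are updated
```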
Automatic mode
For the script to work, the following pre-installed packages are required:
- curl
- zip
- unzip
If at the end you do not see the message that Smart Monitor has been updated, do not run the update script again; take a screenshot of where the script stopped and contact technical support.
The automatic update script automates the update steps and is located at $SM_INSTALLER/opensearch/update.sh. When calling the script, you can specify the configuration file $SM_INSTALLER/opensearch/example_config_opensearch.yaml. The YAML file format is similar to the configuration file used during installation.
The update script supports the following startup parameters:
- -c, --config <path_to_config_file_yaml> - specify the configuration file for the update
- -h, --help - display help on available commands
Start the update with nodes that do not have the master role: data nodes can connect to master nodes of older versions, but not vice versa.
To start the update, run the script:
$SM_INSTALLER/opensearch/update.sh
After you run it, the script automatically detects the paths of the main directories:
- OpenSearch Home Directory - OpenSearch installation directory, usually /app/opensearch
- OpenSearch Conf Directory - OpenSearch configuration files directory, usually /app/opensearch/config/
- OpenSearch Data Directory - data directory, usually /app/data/
- OpenSearch Logs Directory - logs directory, usually /app/logs/
The update script does not touch the data and logs directories; the configuration files directory and systemd files are saved to the temporary directory $SM_INSTALLER/opensearch/staging/.
If you run the script again, the staging directory is cleared, including all copied configuration and systemd files.
================================================================================
SMART MONITOR UPDATE SCRIPT - OPENSEARCH
================================================================================
Current working directory: /app/distr/sm_5.0/opensearch
Current name of install's archive: opensearch-2.18.0-linux-x64.tar.gz
New version OpenSearch: 2.18.0
================================================================================
-- STEP 1. INSTALLATION DIRECTORIES
opensearch.service file found. Will get necessary paths from there
Final Opensearch home directory: /app/opensearch
Final Opensearch conf directory: /app/opensearch/config
Final Opensearch data directory: /app/data/opensearch
Final Opensearch logs directory: /app/logs/opensearch
Is this correct? [y/n]:
Confirm the automatically detected directories by pressing y, or press n to enter your own directories manually.
At the second step, you need to answer the question about allocation. If you enter y, the script will disable allocation before updating and enable it at the end of the script's work.
-- STEP 2. CONFIGURE ALLOCATION
Do you want to disable allocation during update? [y/N]: n
You don't want to disable allocation: n
Is this correct? [y/n]:
At the third step, you will need to enter the password for the admin user. The password will not be displayed when entering.
-- STEP 3. GET ADMIN PASSWORD
Enter password for user "admin":
If you enter an incorrect password, allocation will not be disabled even if you selected it in the previous step, and information about the current node will not be displayed; the update itself is not interrupted.
The script then performs preparatory actions. Before applying the update it asks whether to continue; up to that point no changes are made to the system. Some information about the current node and the cluster as a whole is also displayed.
get current list of plugins
sm-core
sm-im
sm-inventory
sm-ism-action-clickhouse
sm-job-scheduler
sm-job-scheduler-actions-incident
sm-job-scheduler-actions-mitre
sm-knowledge-center
sm-mitre
sm-mssp
sm-rsm
sm-uba
sme
opensearch-security
Information about current node OpenSearch:
{
"name" : "smos-node-00",
"cluster_name" : "smos-cluster",
"cluster_uuid" : "yKPPDCHGSA6rHQT948jokQ",
"version" : {
"distribution" : "opensearch",
"number" : "2.18.0",
"build_type" : "tar",
"build_hash" : "99a9a81da366173b0c2b963b26ea92e15ef34547",
"build_date" : "2024-10-31T19:08:39.157471098Z",
"build_snapshot" : false,
"lucene_version" : "9.12.0",
"minimum_wire_compatibility_version" : "7.10.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "The OpenSearch Project: https://opensearch.org/"
}
!!! AT THIS POINT WE START TO MAKE CHANGES IN OPERATING SYSTEM !!!
Do you want to continue? [y/N]:
If you press Enter, the update is interrupted; to continue, press y.
Upon successful completion of the update, you should see the message SMART MONITOR SUCCESSFULLY UPDATED!, along with information about the cluster and the current node.
-- STEP 10. PRINT INFORMATION
current state of cluster
{
"cluster_name" : "smos-cluster",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"discovered_master" : true,
"discovered_cluster_manager" : true,
"active_primary_shards" : 50,
"active_shards" : 50,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 15,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 76.92307692307693
}
current state clusters nodes
172.16.0.27 14 99 8 1.47 0.63 0.26 dim data,ingest,master * smos-node-00
Information about current node OpenSearch:
{
"name" : "smos-node-00",
"cluster_name" : "smos-cluster",
"cluster_uuid" : "5V2rIp1sRj-M-ANnGfF0cA",
"version" : {
"distribution" : "opensearch",
"number" : "2.18.0",
"build_type" : "tar",
"build_hash" : "99a9a81da366173b0c2b963b26ea92e15ef34547",
"build_date" : "2024-10-31T19:08:39.157471098Z",
"build_snapshot" : false,
"lucene_version" : "9.12.0",
"minimum_wire_compatibility_version" : "7.10.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "The OpenSearch Project: https://opensearch.org/"
}
The following plugins cannot be installed:
-- sm-job-scheduler-actions-incident
-- sm-job-scheduler-actions-mitre
================================================================================
-- SMART MONITOR SUCCESSFULLY UPDATED!
================================================================================
If the update script could not update some plugins, it additionally lists them at the end, as in the example above (the text The following plugins cannot be installed).
The update script takes into account the current list of plugins installed on the OpenSearch nodes. If you need to install an additional plugin, do so manually after the node update completes.
Updating OpenSearch Dashboards
For the script to work, the following pre-installed packages are required:
- curl
- zip
- unzip
The automatic update script automates the update steps and is located at $SM_INSTALLER/opensearch-dashboards/update.sh. When calling the script, you can specify the configuration file $SM_INSTALLER/opensearch-dashboards/example_config_dashboards.yaml. The YAML file format is similar to the configuration file used during installation.
The update script supports the following startup parameters:
- -c, --config <path_to_config_file_yaml> - specify the configuration file for the update
- -h, --help - display help on available commands
While running, the script backs up the systemd service file, opensearch-dashboards.yml, and the configuration directory to the temporary directory $SM_INSTALLER/opensearch-dashboards/staging/. The script does not touch the data and logs directories.
If you run the script again, the staging directory will be cleared.
To update, run the script:
$SM_INSTALLER/opensearch-dashboards/update.sh
The script automatically determines the following directory paths on the current server:
- OpenSearch Dashboards Home Directory - OpenSearch Dashboards installation directory, usually /app/opensearch-dashboards
- OpenSearch Dashboards Conf Directory - OpenSearch Dashboards configuration files directory, usually /app/opensearch-dashboards/config/
- OpenSearch Dashboards Data Directory - data directory, usually /app/data/
- OpenSearch Dashboards Logs Directory - logs directory, usually /app/logs/
================================================================================
SMART MONITOR INSTALL SCRIPT - OPENSEARCH DASHBOARDS
================================================================================
Current working directory: /opt/sm_5.0/opensearch-dashboards
Current name of install's archive: opensearch-dashboards-2.18.0-linux-x64.tar.gz
Current version of OpenSearch-Dashboards: 2.18.0
================================================================================
-- STEP 1. INSTALLATION DIRECTORIES
opensearch-dashboards.service file found. Will get necessary paths from there
Final Opensearch Dashboards home directory: /app/opensearch-dashboards
Final Opensearch Dashboards conf directory: /app/opensearch-dashboards/config
Final Opensearch Dashboards data directory: /app/data/opensearch-dashboards
Final Opensearch Dashboards logs directory: /app/logs/opensearch-dashboards
Is this correct? [y/n]:
Confirm the detected directories by pressing y, or press n to enter your own directories manually.
The script then performs preparatory actions. Before applying the update it asks whether to continue; up to that point no actions affecting system operability are performed. Some information about the current node and the cluster as a whole is also displayed.
Current list of plugins:
-- smartMonitor
-- smartMonitorColumnChart
-- smartMonitorCyberSecurity
-- smartMonitorDrawio
-- smartMonitorHeatmapChart
-- smartMonitorHtmlChart
-- smartMonitorIncidentManager
-- smartMonitorInventory
-- smartMonitorKnowledgeCenter
-- smartMonitorLineChart
-- smartMonitorLookupManager
-- smartMonitorMitreAttack
-- smartMonitorPDFExport
-- smartMonitorPieChart
-- smartMonitorSingleValue
-- smartMonitorTable
-- smartMonitorUserBehaviorAnalytics
Current version of OpenSearch-Dashboards: 2.18.0
!!! AT THIS POINT WE START TO MAKE CHANGES IN OPERATING SYSTEM !!!
Do you want to continue? [y/N]:
Upon successful completion of the update script, the corresponding text SMART MONITOR DASHBOARDS SUCCESSFULLY UPDATED will be displayed.
Migrating Inventory module configurations
If the Inventory module is not installed, proceed to the next step.
In version 5.0, the Inventory Processor module was integrated into the Inventory module. To migrate its configurations, open the developer console (Navigation Menu - System Settings - Developer Console) and execute the command:
POST _reindex
{
"source": {
"index": ".sm_inv_config"
},
"dest": {
"index": ".sm_inv_configs"
},
"script": {
"source": """
Map field(def f, def b) {
return [
"name": f["name"],
"display_name": f["display_name"],
"weight": f["weight"] != null ? f["weight"] : 1,
"base": b
];
}
List fields(def b, def a) {
def fl = [];
b.forEach(f -> fl.add(field(f, true)));
a.forEach(f -> fl.add(field(f, false)));
return fl;
}
Map meta(def id) {
return ["id": id];
}
Map mapping_rule(def d, def s) {
return ["dest_field": d, "source_field": s];
}
Map period(def f, def i) {
return i != null ? ["field": f, "interval": i] : ["field": f];
}
Map source(def s) {
def mapping_rules = [];
for (entry in s["mapping_rules"].entrySet()) {
mapping_rules.add(mapping_rule(entry.getKey(), entry.getValue()));
}
return [
"id": s["source"],
"name": s["source"],
"index": s["index"],
"mapping_rules": mapping_rules,
"period": period("@timestamp", s["time_window"]),
"not_mapping_agg_fields": false
];
}
List sources(def is) {
def sl = [];
is.forEach(s -> sl.add(source(s)));
return sl;
}
List aggs_keys(def ak) {
def sl = [];
ak.forEach(s -> {
sl.add([
"sources": s["sources"],
"fields": s["keys"]
]);
});
return sl;
}
Map schedule_params() {
return [
"with_index": true,
"case_insensitive": false,
"join_with_null_value": false,
"fast_only": false,
"bulk_changes": true
];
}
Map schedule() {
return [
"cron": [
"expression": "* * * * *",
"timezone": "GMT"
]
];
}
Date now = new Date();
Instant instant = Instant.ofEpochMilli(now.getTime());
ZonedDateTime zdt = ZonedDateTime.ofInstant(instant, ZoneId.of('Z'));
if (ctx._source.get("_meta") == null) {
ctx._source.put("_meta", [
"id": ctx._id,
"created": zdt.format(DateTimeFormatter.ISO_INSTANT),
"updated": zdt.format(DateTimeFormatter.ISO_INSTANT),
"from_system": false,
"type": "user"
]);
}
if (ctx._source.get("_meta") != null) {
if (ctx._source.get("_meta").get("tag_ids") == null) {
def arr = [];
ctx._source.get("_meta").put("tag_ids", arr);
}
if (ctx._source.get("_meta").get("from_system") == null) {
ctx._source.get("_meta").put("from_system", false);
}
if (ctx._source.get("_meta").get("type") == null) {
ctx._source.get("_meta").put("type", "user");
}
}
if (ctx._source.get("_permissions") == null) {
ctx._source.put("_permissions", [
"read": [
"roles": [],
"users": []
],
"write": [
"roles": [],
"users": []
],
"owner": "admin"
]);
}
ctx._source.put("fields", fields(ctx._source.get("base"), ctx._source.get("advanced")));
ctx._source.remove("base");
ctx._source.remove("advanced");
ctx._source.put("aggregation_keys", aggs_keys(ctx._source.get("key")));
ctx._source.remove("key");
ctx._source.put("name", ctx._source.get("inventory_name"));
ctx._source.remove("inventory_name");
ctx._source.put("output_index", ctx._source.get("output"));
ctx._source.put("output_replica", ctx._source.get("output") + "_replica");
ctx._source.remove("output");
ctx._source.put("priorities", ctx._source.get("priorities"));
ctx._source.put("sources", sources(ctx._source.get("inventory_sources")));
ctx._source.remove("inventory_sources");
if (ctx._source.get("category") != null) {
ctx._source.put("category", ctx._source.get("category"));
}
ctx._source.put("asset_name", ctx._source.get("asset_name"));
if (ctx._source.get("ttl") != null) {
ctx._source.put("ttl", ctx._source.get("ttl"));
if (ctx._source.get("ttl") == "") {
ctx._source.remove("ttl");
}
}
ctx._source.put("schedule_params", schedule_params());
ctx._source.put("enabled", false);
ctx._source.put("schedule", schedule());
""",
"lang": "painless"
}
}
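After the reindex completes, you can optionally compare document counts in the old and new indices as a simple sanity check, using the standard _count API in the same developer console:

```
GET .sm_inv_config/_count
GET .sm_inv_configs/_count
```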
Migrating incident and Inventory connections
If the Inventory and Incident Manager modules are not installed, proceed to the next step.
In version 5.0, changes were made to the connections between the Inventory and Incident Manager modules. The installer includes a utility for migrating these connections. The utility is located in the directory $SM_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/ and is written in Python. The utility's main requirements:
- Python 3.8+
- plugin opensearch-py
The remaining packages are included in the standard Python installation; the full list of packages:
- certifi==2023.7.22
- charset-normalizer==3.3.2
- idna==3.4
- opensearch-py==2.3.2
- python-dateutil==2.8.2
- requests==2.31.0
- six==1.16.0
- urllib3==2.0.7
Python 3.8 with the required set of packages is included in the Smart Monitor 5.0 installer.
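If you plan to use a system interpreter instead of the bundled one, a quick pre-check of the version requirement (exits non-zero when Python is older than 3.8):

```shell
# Verify the system interpreter meets the Python 3.8+ requirement.
python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 8) else 1)'
```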
Configuration file
Before running the utility, configure the parameters in the file $SM_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/default.ini. An example configuration file is shown below:
[server]
host = 127.0.0.1
port = 9200
[user]
name = admin
pass = password
In the server.host parameter, specify the IP address of any OpenSearch node; a node with the data role and the routing_mode: hot attribute is recommended. If you omit the user.pass parameter, the utility will prompt for the password interactively.
Utility startup parameters
The utility has the following startup parameters:
- -c, --config - configuration file (optional); default is ./default.ini
- -h, --help - display help
Running the utility
To perform migrations, run the utility with the command:
$SM_INSTALLER/utils/python/bin/python3 $SM_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/main.py -c $SM_INSTALLER/utils/migrations_4.3-5.0/incident_inventory_connection/default.ini
Configuring the Inventory module
If the Inventory module is not installed, proceed to the next step.
To integrate the Inventory module with Smart Monitor, add the key inv.os.pass to the password store on the Smart Monitor Data Storage and Smart Monitor Master Node instances; if integration with PostgreSQL is planned, additionally add the key inv.pg.pass. This instruction shows how to add a single key to the password store.
Adding a password to the keystore using a request
Open the menu Navigation Menu - System Settings - Developer Console and execute the following request:
GET _core/keystore
The API allows changing the keystore on specific cluster nodes only, for example, by a node-name mask. You can read more about the keystore management API in the corresponding article.
The contents of the keystores on all cluster nodes will be displayed. To add a key to all cluster nodes, edit and execute the following request, specifying the admin user's password:
POST _core/keystore/inv.os.pass
{
"value" : "<PASSWORD_USER>"
}
Adding a password to the keystore manually
On OpenSearch nodes with the Inventory plugin installed, you need to add the key inv.os.pass to the keystore. This can be done with the following command:
sudo -u opensearch $OPENSEARCH_HOME/bin/opensearch-keystore add inv.os.pass
The command will prompt for the key value; enter the admin user's password. After executing the command, restart the OpenSearch node.
Initializing the Inventory module
To initialize the module, go to System Settings - Module Settings - Inventory - Initialization.
Changing the Inventory menu
If the Inventory module is not installed, proceed to the next step.
In version 5.0, the system name of the Inventory section was changed. To access the new functionality, edit the navigation menu item: open Navigation Menu - System Settings - Module Settings - Main - Menu Settings, find the Inventory module and expand it, then expand the Assets section inside it. Change the System name field to configs/list.
Click the Save Changes button.
Identifier format settings in Incident Manager
If the Incident Manager module is not installed, proceed to the next step.
In Smart Monitor 5.0, the identifier format in Incident Manager has changed. First, check for the presence of new settings. To do this, open the developer console (Navigation Menu - System Settings - Developer Console) and execute the command:
GET _core/im_settings/incident-id-lock
The result of the request will be either a document or a message that the document is absent, as in the example below:
{"message":"Config with id: 'incident-id-lock' not found."}
If the previous command returned a message about the absence of the document, execute the following request. If it returned a document, the request below is not necessary.
POST _core/im_settings/incident-id-lock
{
"lock": false,
"current_incident_id": 0
}
Adding multiline comment
If the Incident Manager module is not installed, proceed to the next step.
In version 5.0, support for multiline comments was added to Incident Manager. To enable this functionality, open the developer console (Navigation Menu - System Settings - Developer Console) and execute the command:
PUT _core/im_settings/incident-manager-settings
{
"editFields" : {
"comment" :
{ "type" : "textarea" }
}
}
Adding Sigma rules to the menu
If the Cyber Security module is not installed, proceed to the next step.
In version 5.0, Sigma rules were added to the Cyber Security module. To access the new functionality, create a navigation menu item: open Navigation Menu - System Settings - Module Settings - Main - Menu Settings and click the Add Module button.
Fill in the fields for the module as follows:
| Field name | Content |
|---|---|
| Item type | Group |
| System name | sigma-rules |
| Title | Sigma Rules |
| Enable display | Yes, the flag must be enabled |
Inside the Sigma Rules module, click the Add Section button.
Fill in the fields for the section as follows:
| Field name | Content |
|---|---|
| Item type | Page |
| System name | |
| Title | Rules List |
| Enable display | Yes, the flag must be enabled |
Click the Save Changes button. Configure permissions for user groups if necessary.
The menu item can also be added through the JSON structure. To do this, open Navigation Menu - System Settings - Module Settings - Main - Menu Settings, open the JSON structure tab, and add the following fragment to the top-level list, separated by a comma:
{
"itemType": "group",
"name": "sigma-rules",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "iff6f40d1-e210-11ef-b57c-6bad33908cd9",
"title": "Sigma Rules",
"enabled": true,
"sections": [
{
"itemType": "page",
"name": "" ,
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i0ce9b921-e211-11ef-b57c-6bad33908cd9",
"title": "Rules List",
"enabled": true
}
]
}
The Sigma rules themselves need to be imported: download the archive from the official site; the recommended file is sigma_all_rules.zip.
Import via web interface
Use the instruction on initializing Sigma rules.
Import via terminal
On all nodes with the sm-sigma module installed, create the directory $OPENSEARCH_HOME/utils/sigma, usually /app/opensearch/utils/sigma, and download the sigma_all_rules.zip file into it.
mkdir -p /app/opensearch/utils/sigma
cp ./sigma_all_rules.zip /app/opensearch/utils/sigma/
chown -R opensearch:opensearch /app/opensearch/utils/sigma
Disable allocation according to the instruction, restart the cluster nodes and enable allocation again.
Open the developer console (Navigation Menu - System Settings - Developer Console) and execute the command:
POST _core/sigma/rule
{
"zipped_package_filename": "sigma_all_rules.zip"
}
Initializing the MITRE ATT&CK matrix
If the MITRE ATT&CK module is not installed, proceed to the next step.
After updating, you need to re-initialize MITRE ATT&CK, use the instruction in the corresponding section.
Adding notes
In version 5.0, support for a list of notes was added to the Knowledge Center component. To access the new functionality, create a navigation menu item: open Navigation Menu - System Settings - Module Settings - Main - Menu Settings, find Knowledge Center and expand it, then click the Add Section button.
Fill in the fields as follows:
| Field name | Content |
|---|---|
| Item type | Page |
| System name | notebooks/list |
| Title | Notes List |
| Enable display | Yes, the flag must be enabled |
Click the Save Changes button. Configure permissions for user groups if necessary.
The menu item can also be added through the JSON structure. To do this, open Navigation Menu - System Settings - Module Settings - Main - Menu Settings, open the JSON structure tab, find Knowledge Center, and add the following fragment to its sections block, separated by a comma:
{
"itemType": "page",
"name": "notebooks/list",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "ic1c91b81-145a-11f0-82e8-c104a1233526",
"title": "Notes List",
"enabled": true
}
Click the Save Changes button.
To use files in notes, see the setup article.
Adding RSM 2.0
When updating Smart Monitor from version 5.0.0 to 5.0.1, for configurations where Resource-Service Model v2.0 is used, execute the following request:
POST .sm_rsm_v2_metrics/_update_by_query
{
"query": {
"match_all": {}
},
"script": {
"source": "ctx._source.enabled = true",
"lang": "painless"
}
}
In version 5.0, Resource-Service Model version 2.0 was added. To access the new functionality, create a navigation menu item: open Navigation Menu - System Settings - Module Settings - Main - Menu Settings and click the Add Module button.
Fill in the fields for the module as follows:
| Field name | Content |
|---|---|
| Item type | Group |
| System name | rsm-v2 |
| Title | RSM 2.0 |
| Enable display | Yes, the flag must be enabled |
Inside the RSM 2.0 module, click the Add Section button.
Fill in the fields as follows:
| Field name | Content |
|---|---|
| Item type | Page |
| System name | tree |
| Title | RSM Tree |
| Enable display | Yes, the flag must be enabled |
Inside the RSM 2.0 module, click the Add Section button.
Fill in the fields as follows:
| Field name | Content |
|---|---|
| Item type | Page |
| System name | layers |
| Title | Layers |
| Enable display | Yes, the flag must be enabled |
Click the Save Changes button. Configure permissions for user groups if necessary.
The menu item can also be added through the JSON structure. To do this, open Navigation Menu - System Settings - Module Settings - Main - Menu Settings, open the JSON structure tab, and add the following fragment to the top-level list, separated by a comma:
{
"itemType": "group",
"name": "rsm-v2",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i166ef2a1-13ba-11f0-a668-6560e9eb14d5",
"title": "RSM 2.0",
"enabled": true,
"sections": [
{
"itemType": "page",
"name": "tree",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i46e0d571-13ba-11f0-a668-6560e9eb14d5",
"title": "RSM Tree",
"enabled": true
},
{
"itemType": "page",
"name": "layers",
"_permissions": {
"owner": "admin",
"read": {
"roles": [],
"users": []
},
"write": {
"roles": [],
"users": []
}
},
"id": "i5de13351-13ba-11f0-a668-6560e9eb14d5",
"title": "Layers",
"enabled": true
}
]
}
Click the Save Changes button.
To start using RSM 2.0, you need to initialize it and perform migration.