Setting up data collection from the CTT Threat Feed provider
Installing the data collection script
On the VM where Logstash is installed, create a directory for storing scripts and related information:
mkdir -p /app/TI/RST/rst_script/
mkdir -p /app/TI/RST/RST_report/
Copy the threatfeed.py script file to the /app/TI/RST/rst_script/ directory. In the same directory, create a .env file with the following content:
API_KEY=<API key>
API_URL=<data collection endpoint>
The API key can be obtained in your personal account; it becomes available after purchasing a license.
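The script reads these values from the .env file passed via the -c flag. As an illustration of how such a file maps onto environment variables, here is a minimal sketch of a KEY=VALUE parser; this is not the actual implementation inside threatfeed.py:

```python
import os

def load_env(path):
    """Parse a simple KEY=VALUE .env file into a dict.

    Blank lines and comments are skipped. Illustrative sketch only;
    the real script may use a library such as python-dotenv instead.
    """
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

# Example: export the values so a child process can read API_KEY and API_URL
# for key, value in load_env("/app/TI/RST/rst_script/.env").items():
#     os.environ.setdefault(key, value)
```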
Setting up Crontab
Using crontab, schedule the command that runs the data collection script. The Python interpreter bundled with the Smart Monitor installer is used to run the script; by default, it is located at /app/logstash/utils/python/bin/python3. The script accepts the following launch arguments:
- -r (--resource) – name of the Threat Feed API resource to download: ip, domain, url, hash, or all. By default, all available resources are loaded
- -f (--format) – output file format: csv or json. JSON is used by default
- -d (--date) – export date in YYYYmmdd format (for example, 20240812) or the keyword latest, which forces the script to fetch the freshest data. By default, latest is used
- -o (--output) – path to the directory where data files will be saved. If not specified, the current directory is used
- -c (--config) – path to the .env file containing the environment variables (API_URL and API_KEY) for connecting to the Threat Feed API
- --no-interaction – disable interactive mode; the script runs without asking the user for confirmation (does not write to the console)
- --no-ssl – disable SSL certificate verification when connecting to the Threat Feed API
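The interface above can be mirrored with a standard argparse definition. The following is an illustrative sketch of the documented flags and defaults, not the actual threatfeed.py source:

```python
import argparse

def build_parser():
    """Argument parser mirroring the documented threatfeed.py flags (sketch)."""
    parser = argparse.ArgumentParser(description="Download Threat Feed API resources")
    parser.add_argument("-r", "--resource", default="all",
                        choices=["ip", "domain", "url", "hash", "all"],
                        help="resource to download (default: all)")
    parser.add_argument("-f", "--format", default="json", choices=["csv", "json"],
                        help="output file format (default: json)")
    parser.add_argument("-d", "--date", default="latest",
                        help="export date in YYYYmmdd format, or 'latest'")
    parser.add_argument("-o", "--output", default=".",
                        help="directory for the downloaded files (default: current)")
    parser.add_argument("-c", "--config", default=".env",
                        help="path to the .env file with API_URL and API_KEY")
    parser.add_argument("--no-interaction", action="store_true",
                        help="run without confirmation prompts")
    parser.add_argument("--no-ssl", action="store_true",
                        help="disable SSL certificate verification")
    return parser
```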
Before running the script, you need to install all dependencies for the script from the requirements.txt file.
/app/logstash/utils/python/bin/pip3 install -r /app/TI/RST/rst_script/requirements.txt
Then add the crontab entry itself. The following example runs the script daily at 01:00:
0 1 * * * /app/logstash/utils/python/bin/python3 /app/TI/RST/rst_script/threatfeed.py -c /app/TI/RST/rst_script/.env -o /app/TI/RST/RST_report/ --no-interaction
Setting up Logstash
To properly prepare Logstash for collecting information, you need to create configurations in the /app/logstash/config/conf.d/ folder. The necessary files for creating a ready configuration are located in the logstash folder.
The input folder contains a file describing the data collection method; the file plugin is used to read data from files. Make sure Logstash has sufficient permissions to read files from the specified folder.
In the filter folder, there is a handler for received events.
The output folder describes how processed data is sent to Smart Monitor. Replace the address under the hosts key with the addresses of your hot and warm data nodes, separated by commas:
hosts => ["https://<HOT_NODE_IP>:9200", "https://<WARM_NODE_IP>:9200"]
All three files must be combined into a single configuration file in the following order: input, filter, output.
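Combined in that order, the resulting file takes roughly the following shape. This is only a sketch under stated assumptions: the actual input, filter, and output bodies come from the supplied files, the wildcard path is inferred from the report directory above, and the elasticsearch output plugin is assumed from the hosts setting shown earlier:

```
input {
  file {
    # read the exported JSON reports (path assumed from the report directory)
    path => "/app/TI/RST/RST_report/*.json"
  }
}

filter {
  # event processing from the supplied filter folder goes here
}

output {
  elasticsearch {
    hosts => ["https://<HOT_NODE_IP>:9200", "https://<WARM_NODE_IP>:9200"]
  }
}
```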
After creating the configuration, register the pipeline that runs it. To do this, append the lines from the logstash/pipelines/pipelines.yml file to the very end of the /app/logstash/config/pipelines.yml file:
- pipeline.id: rst
path.config: "/app/logstash/config/conf.d/rst.conf"
Setting up Logrotate
Logrotate automates log file management. Create a ti_rotate.conf file in the /etc/logrotate.d directory with the following content:
/app/TI/RST/RST_report/*.json {
daily
rotate 1
missingok
nocompress
nocreate
nomail
noolddir
dateext
dateformat _%Y.%m.%d
extension .json
sharedscripts
postrotate
# Keep only the two most recent export files for each resource type
find /app/TI/RST/RST_report/ -name 'domain_*.json' -type f | sort -r | awk 'NR>2 {print}' | xargs -r rm -f
find /app/TI/RST/RST_report/ -name 'ip_*.json' -type f | sort -r | awk 'NR>2 {print}' | xargs -r rm -f
find /app/TI/RST/RST_report/ -name 'url_*.json' -type f | sort -r | awk 'NR>2 {print}' | xargs -r rm -f
find /app/TI/RST/RST_report/ -name 'hash_*.json' -type f | sort -r | awk 'NR>2 {print}' | xargs -r rm -f
endscript
}
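The postrotate pipeline above relies on the dateext/dateformat naming (for example, ip_2024.08.12.json), which sorts chronologically as a plain string, and keeps the two newest files per resource type. The same retention logic can be sketched in Python; this is an illustrative equivalent, not part of the provided tooling:

```python
from pathlib import Path

def prune_reports(directory, prefixes=("domain_", "ip_", "url_", "hash_"), keep=2):
    """Delete all but the `keep` newest dated JSON exports for each resource prefix."""
    removed = []
    for prefix in prefixes:
        # names like ip_2024.08.12.json sort chronologically as strings
        files = sorted(Path(directory).glob(prefix + "*.json"), reverse=True)
        for stale in files[keep:]:
            stale.unlink()
            removed.append(stale.name)
    return removed
```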