Golfreeze.packetlove.com: Life style of Golfreeze Canon400D Family kammtan.com Jazz Freebsd Unix Linux System Admin guitar Music
All about unix linux freebsd and FAQ for Packetlove.com Web hosting , Mail hosting , VoIP + IP PBX server => all application on unix knowledges by golfreeze => Topic started by: golfreeze on January 30, 2019, 05:18:07 pm
-
[security onion] Kibana does not start after running soup on the master, and how to create a new index and recover the dashboard configuration.
https://discuss.elastic.co/t/error-kibana-server-is-not-ready-yet/156834/16
###upgrade on master
sudo soup
reboot
sudo soup
reboot
so-stop
so-start
###Then Kibana fails to start
Waiting for ElasticSearch...connected!
so-kibana: WARN[0008] Error while downloading remote metadata, using cached timestamp - this might not be the latest version available remotely
bf01513e1ad2977c150a82dda6e3cda7110db0ff2283fdc08ba1f804e5a1ac41
##### Log => /var/log/kibana/kibana.log #####
{"type":"log","@timestamp":"2019-01-28T06:41:25Z","tags":["info","migrations"],"pid":1,"message":"Creating index .kibana_2."}
{"type":"error","@timestamp":"2019-01-28T06:41:25Z","tags":["fatal","root"],"pid":1,"level":"fatal","error":{"message":"[cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];","name":"Error","stack":"[cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; :: {\"path\":\"/.kibana_2\",\"query\":
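The `FORBIDDEN/12/index read-only / allow delete` block is usually applied by Elasticsearch itself once the flood-stage disk watermark (95% by default) is hit, so it is worth checking free space before (and after) clearing the block, or the block will just come back. A minimal check, assuming the Elasticsearch data lives under / or /nsm:

```shell
# Check disk usage on the likely data partitions (the flood-stage watermark
# trips around 95% by default, which makes indices read-only).
df -h / /nsm 2>/dev/null || df -h /

# Ask the cluster which indices still carry the block (assumes ES on localhost:9200):
# curl -s 'http://localhost:9200/_all/_settings?pretty' | grep -B4 'read_only_allow_delete'
```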
####Fix: run this on the master node
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/.kibana*/_settings -d '{"index.blocks.read_only_allow_delete": null}'
##or other index patterns, e.g. the monitoring indices
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/.monitoring-*/_settings -d '{"index.blocks.read_only_allow_delete": null}'
##then
curl -XDELETE http://localhost:9200/.kibana
curl -XDELETE http://localhost:9200/.kibana_1
curl -XDELETE http://localhost:9200/.kibana_2
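The three deletes above can be wrapped in one small helper (a sketch assuming Elasticsearch on localhost:9200; this permanently removes the saved Kibana objects, so only run it when you intend to rebuild them afterwards):

```shell
# Hypothetical helper: delete the Kibana saved-object indices one by one.
delete_kibana_indices() {
  es=${1:-http://localhost:9200}
  for idx in .kibana .kibana_1 .kibana_2; do
    curl -s -XDELETE "$es/$idx"
    echo    # newline between the JSON responses
  done
}
# delete_kibana_indices
```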
##then
so-stop
so-start
and Kibana starts successfully [ok]
####Then re-create the index pattern in Kibana after deleting the .kibana indices
In the UI, go to Management > Index Patterns and click Create Index Pattern.
The index pattern is *:logstash-* and the time filter field name is "@timestamp".
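If you prefer the command line over the UI, Kibana 6.x also exposes a saved-objects API that can create the same index pattern (a sketch; `create_index_pattern` is a hypothetical helper, and the localhost:5601 endpoint is an assumption):

```shell
# Hypothetical CLI equivalent of Management > Index Patterns
# (Kibana 6.x saved-objects API; kbn-xsrf header is required).
create_index_pattern() {
  kibana=${1:-http://localhost:5601}
  curl -s -XPOST "$kibana/api/saved_objects/index-pattern" \
    -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
    -d '{"attributes":{"title":"*:logstash-*","timeFieldName":"@timestamp"}}'
}
# create_index_pattern
```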
####and run this script to re-create the dashboards: https://github.com/Security-Onion-Solutions/securityonion-elastic/blob/master/usr/sbin/so-elastic-configure-kibana-dashboards
cd /usr/sbin
vi so-elastic-configure-kibana-dashboards
##and run the script
bash ./so-elastic-configure-kibana-dashboards
then check that the dashboards come back again ! ;)
-
master# so-status
Status: securityonion
* sguil server [ OK ]
Status: HIDS
* ossec_agent (sguil) [ OK ]
Status: Elastic stack
* so-elasticsearch [ OK ]
* so-logstash [ OK ]
* so-kibana [ OK ]
* so-freqserver [ OK ]
* so-domainstats [ OK ]
* so-curator [ OK ]
* so-elastalert [ OK ]
master# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6ef42801833f securityonionsolutions/so-curator "/bin/bash" 31 minutes ago Up 31 minutes so-curator
2eb2d3619676 securityonionsolutions/so-elastalert "/opt/start-elastale…" 31 minutes ago Up 31 minutes so-elastalert
1b1e443cd090 securityonionsolutions/so-logstash "/usr/local/bin/dock…" 32 minutes ago Up 32 minutes 0.0.0.0:5044->5044/tcp, 0.0.0.0:6050-6053->6050-6053/tcp, 0.0.0.0:9600->9600/tcp so-logstash
50331f0b3e26 securityonionsolutions/so-kibana "/bin/sh -c /usr/loc…" 32 minutes ago Up 32 minutes 127.0.0.1:5601->5601/tcp so-kibana
ae55461bfd11 securityonionsolutions/so-elasticsearch "/bin/bash bin/es-do…" 32 minutes ago Up 32 minutes 127.0.0.1:9200->9200/tcp, 127.0.0.1:9300->9300/tcp so-elasticsearch
7972729ebe07 securityonionsolutions/so-domainstats "/bin/sh -c '/usr/bi…" 32 minutes ago Up 32 minutes 20000/tcp so-domainstats
01222a20f0cc securityonionsolutions/so-freqserver "/bin/sh -c '/usr/bi…" 32 minutes ago Up 32 minutes 10004/tcp so-freqserver
-
Edit the Security Onion configuration for data-retention days
vi /etc/nsm/securityonion.conf
# How many days would you like to keep in the Sguil database archive?
DAYSTOKEEP=30
# How many days worth of tables would you like to repair every day?
DAYSTOREPAIR=7
# At what percentage of disk usage should the NSM scripts warn you?
WARN_DISK_USAGE=80
# At what percentage of disk usage should the NSM scripts begin purging old data?
CRIT_DISK_USAGE=90
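After editing, the retention-related values can be pulled back out for a quick sanity check (`show_retention` is a hypothetical helper; the conf path is the one from this post):

```shell
# Hypothetical helper: print the retention/disk settings from securityonion.conf,
# skipping comment lines.
show_retention() {
  grep -E '^(DAYSTOKEEP|DAYSTOREPAIR|WARN_DISK_USAGE|CRIT_DISK_USAGE)=' "$1"
}
# show_retention /etc/nsm/securityonion.conf
```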
-
=== check Logstash status
curl -XGET '10.2.8.123:9600/_node/stats/jvm?pretty'
=== test sending a curl request to Logstash
curl -X GET "localhost:9600/?v"
curl -X GET "10.2.8.123:9600/?v"
=== query event stats from the Logstash server on port 9600
curl -XGET '10.2.8.123:9600/_node/stats/events?pretty'
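The stats queries above can be folded into one helper (`ls_stats` is a hypothetical name; it assumes the Logstash monitoring API on port 9600, with the section as a parameter):

```shell
# Hypothetical helper: query a section of the Logstash node stats API
# (sections include jvm, events, pipelines, ...).
ls_stats() {
  host=${1:-localhost}
  section=${2:-jvm}
  curl -s -XGET "http://$host:9600/_node/stats/$section?pretty"
}
# ls_stats 10.2.8.123 events
```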
-
===steps after the dashboard is done (Security Onion 2.3) => https://docs.securityonion.net/en/2.3/logstash.html
If the master node cannot run searches and you see the error
"Error: Could not locate that index-pattern-field (id: @timestamp)"
# ls -la /opt/so/conf/logstash/pipelines/manager
total 20
drwxr-xr-x 2 logstash socore 4096 May 22 23:57 .
drwxr-xr-x 4 logstash socore 4096 May 29 23:42 ..
-rw-r--r-- 1 logstash socore 69 May 22 23:57 0009_input_beats.conf
-rw-r--r-- 1 logstash socore 1065 May 22 23:57 0010_input_hhbeats.conf
-rw-r--r-- 1 logstash socore 206 May 22 23:57 9999_output_redis.conf
# ls -la /opt/so/conf/logstash/pipelines/search
total 44
drwxr-xr-x 2 logstash socore 4096 May 29 23:42 .
drwxr-xr-x 4 logstash socore 4096 May 29 23:42 ..
-rw-r--r-- 1 logstash socore 180 May 29 23:42 0900_input_redis.conf
-rw-r--r-- 1 logstash socore 372 May 29 23:42 9000_output_zeek.conf
-rw-r--r-- 1 logstash socore 351 May 29 23:42 9002_output_import.conf
-rw-r--r-- 1 logstash socore 342 May 29 23:42 9034_output_syslog.conf
-rw-r--r-- 1 logstash socore 392 May 29 23:42 9100_output_osquery.conf
-rw-r--r-- 1 logstash socore 341 May 29 23:42 9400_output_suricata.conf
-rw-r--r-- 1 logstash socore 370 May 29 23:42 9500_output_beats.conf
-rw-r--r-- 1 logstash socore 338 May 29 23:42 9600_output_ossec.conf
-rw-r--r-- 1 logstash socore 359 May 29 23:42 9700_output_strelka.conf
2. Add the following to /opt/so/saltstack/local/pillar/global.sls
logstash:
pipelines:
manager:
config:
- so/0009_input_beats.conf
- so/0010_input_hhbeats.conf
- so/9999_output_redis.conf.jinja
search:
config:
- so/0900_input_redis.conf.jinja
- so/9000_output_zeek.conf.jinja
- so/9002_output_import.conf.jinja
- so/9034_output_syslog.conf.jinja
- so/9100_output_osquery.conf.jinja
- so/9400_output_suricata.conf.jinja
- so/9500_output_beats.conf.jinja
- so/9600_output_ossec.conf.jinja
- so/9700_output_strelka.conf.jinja
3. restart logstash
# so-logstash-restart
4. restart elasticsearch
# so-elasticsearch-restart
====location of the salt pipeline config
/opt/so/saltstack/default/salt/logstash/pipelines/config/so
===>after configuring the pipelines in global.sls, Docker starts with the chosen pipelines and keeps the rendered configuration in the path below.
/opt/so/conf/logstash/pipelines/search
===docker command to inspect the container
docker inspect so-logstash
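`docker inspect` dumps a lot of JSON; a Go-template filter narrows it to the bind mounts, which is where you can confirm the rendered pipeline config is mapped into the container (a sketch; `show_mounts` is a hypothetical helper and needs a running Docker daemon):

```shell
# Show only source -> destination of each mount on a container.
show_mounts() {
  docker inspect -f \
    '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' "$1"
}
# show_mounts so-logstash
```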
With that, the logs should show up again in the Kibana dashboard.