CaffeineFueled 2024-10-24 01:29:57 +02:00
commit ba9def34c6
Signed by: CaffeineFueled
GPG key ID: 739D3C8D00944004
15 changed files with 1707 additions and 0 deletions

21
LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2024 CF
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

63
README.md Normal file

@@ -0,0 +1,63 @@
# Notes
**CURRENTLY NOT WORKING**
Get the stack up and running:
`git clone https://git.ittavern.com/CaffeineFueled/podman-grafana-loki-elastic.git`
`cd podman-grafana-loki-elastic`
`podman kube play grafana-stack.yaml`
Take it down again:
`podman kube down grafana-stack.yaml`
> Note: not ready for production!
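A quick sanity check after `kube play` (a sketch; the pod should appear as `grafana-stack-pod`, the name `elastic/elasticsearch.yml` also lists in `cluster.initial_master_nodes`, though naming can differ between Podman versions):
`podman pod ps`
`podman ps --pod --filter pod=grafana-stack-pod`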
# Configuration
## Endpoints
**Grafana**
`localhost:3000` # web interface
**Loki**
`localhost:3100` # default port for client communication
**Elastic**
`localhost:9200` # default HTTP port for client communication + REST API
`localhost:9300` # default transport port for node-to-node communication
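Once the pod is up, a minimal reachability check from the host (a sketch; these are the services' standard health/info endpoints):
`curl -s http://localhost:3000/api/health` # Grafana health
`curl -s http://localhost:3100/ready` # Loki readiness
`curl -s http://localhost:9200` # Elasticsearch cluster info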
## Auth
Open Grafana; the default credentials are `admin`/`admin`.
## Data Source
Add Loki as a data source in Grafana (URL `http://localhost:3100`) so its logs can be explored; a scripted alternative is sketched below.
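A sketch using the Grafana HTTP API instead of the UI (assumes the default `admin`/`admin` credentials from above; `localhost:3100` works because all containers share the pod's network namespace):
`curl -s -X POST http://admin:admin@localhost:3000/api/datasources -H 'Content-Type: application/json' -d '{"name":"Loki","type":"loki","url":"http://localhost:3100","access":"proxy"}'`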
# Changelog
Changes from the initial project:
- Removed Tempo/Mimir
- Loki Config `loki-local-config.yaml`
- `store: boltdb-shipper` > `store: tsdb`
- `schema: v11` > `schema: v13`
# Links
[Initial Article](https://grafana.com/blog/2022/09/08/how-to-deploy-the-grafana-stack-using-podman/)
[Initial Repo](https://github.com/CastawayEGR/grafana-stack-podman)
[Elastic Default Configuration Files](https://github.com/elastic/elasticsearch/tree/main/distribution/src/config)
# TODO
- [ ] Authentication for Loki
- [ ] Authentication for Elastic
- [ ] Docs for '[production settings](https://www.elastic.co/guide/en/elasticsearch/reference/7.5/system-config.html)' in Elastic
- [ ] Clarify which ports are really needed

1
elastic/.gitignore vendored Normal file

@@ -0,0 +1 @@
elasticsearch.keystore

4
elastic/data/.gitignore vendored Normal file

@@ -0,0 +1,4 @@
*
*/
!.gitignore
!.gitkeep

0
elastic/data/.gitkeep Normal file

45
elastic/elasticsearch.yml Normal file

@@ -0,0 +1,45 @@
# Do not use this configuration in production.
# It is for demonstration purposes only.
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# PATH CONFIG
# https://www.elastic.co/guide/en/elasticsearch/reference/7.5/path-settings.html
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 23-10-2024 21:52:59
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  #keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  #verification_mode: certificate
  #keystore.path: certs/transport.p12
  #truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["grafana-stack-pod"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
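Note: with xpack.security.enabled set to false above, the HTTP API answers without credentials, so a quick smoke test from the host might look like this (a sketch; the index name demo-logs is arbitrary):
# index a test document, then search for it
curl -s -X POST 'http://localhost:9200/demo-logs/_doc?refresh=true' -H 'Content-Type: application/json' -d '{"message":"hello from the stack"}'
curl -s 'http://localhost:9200/demo-logs/_search?q=message:hello&pretty'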

87
elastic/jvm.options Normal file

@@ -0,0 +1,87 @@
################################################################
##
## JVM configuration
##
################################################################
##
## WARNING: DO NOT EDIT THIS FILE. If you want to override the
## JVM options in this file, or set any additional options, you
## should create one or more files in the jvm.options.d
## directory containing your adjustments.
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/@project.minor.version@/jvm-options.html
## for more information.
##
################################################################
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## which should be named with .options suffix, and the min and
## max should be set to the same value. For example, to set the
## heap to 4 GB, create a new file in the jvm.options.d
## directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/@project.minor.version@/heap-size.html
## for more information
##
################################################################
################################################################
## Expert settings
################################################################
##
## All settings below here are considered expert settings. Do
## not adjust them unless you understand what you are doing. Do
## not edit them in this file; instead, create a new file in the
## jvm.options.d directory containing your adjustments.
##
################################################################
-XX:+UseG1GC
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
# Leverages accelerated vector hardware instructions; removing this may
# result in less optimal vector performance
20-:--add-modules=jdk.incubator.vector
# Required to workaround performance issue in JDK 23, https://github.com/elastic/elasticsearch/issues/113030
23:-XX:CompileCommand=dontinline,java/lang/invoke/MethodHandle.setAsTypeCache
23:-XX:CompileCommand=dontinline,java/lang/invoke/MethodHandle.asTypeUncached
# Lucene 10: apply MADV_NORMAL advice to enable more aggressive readahead
-Dorg.apache.lucene.store.defaultReadAdvice=normal
## heap dumps
# generate a heap dump when an allocation from the Java heap fails; heap dumps
# are created in the working directory of the JVM unless an alternative path is
# specified
-XX:+HeapDumpOnOutOfMemoryError
# exit right after heap dump on out of memory error
-XX:+ExitOnOutOfMemoryError
# specify an alternative path for heap dumps; ensure the directory exists and
# has sufficient space
#@heap.dump.path@
# specify an alternative path for JVM fatal error logs
#@error.file@
## GC logging
-Xlog:gc*,gc+age=trace,safepoint:file=@loggc@:utctime,level,pid,tags:filecount=32,filesize=64m
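Note: because grafana-stack.yaml mounts the whole ./elastic/ directory as the container's config directory, a heap override can simply be dropped into jvm.options.d on the host (a sketch; 512m is an arbitrary example value):
mkdir -p elastic/jvm.options.d
printf -- '-Xms512m\n-Xmx512m\n' > elastic/jvm.options.d/heap.options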

132
elastic/log4j2.properties Normal file

@@ -0,0 +1,132 @@
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%consoleException%n
######## Server JSON ############################
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type = ECSJsonLayout
appender.rolling.layout.dataset = elasticsearch.server
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
######## Server - old style pattern ###########
appender.rolling_old.type = RollingFile
appender.rolling_old.name = rolling_old
appender.rolling_old.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling_old.layout.type = PatternLayout
appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
appender.rolling_old.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_old.policies.type = Policies
appender.rolling_old.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_old.policies.time.interval = 1
appender.rolling_old.policies.time.modulate = true
appender.rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_old.policies.size.size = 128MB
appender.rolling_old.strategy.type = DefaultRolloverStrategy
appender.rolling_old.strategy.fileIndex = nomax
appender.rolling_old.strategy.action.type = Delete
appender.rolling_old.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling_old.strategy.action.condition.type = IfFileName
appender.rolling_old.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling_old.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling_old.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
rootLogger.appenderRef.rolling_old.ref = rolling_old
######## Deprecation JSON #######################
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.json
appender.deprecation_rolling.layout.type = ECSJsonLayout
# Intentionally follows a different pattern to above
appender.deprecation_rolling.layout.dataset = deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.json.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
appender.header_warning.type = HeaderWarningAppender
appender.header_warning.name = header_warning
#################################################
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref = header_warning
logger.deprecation.additivity = false
######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset = elasticsearch.index_search_slowlog
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 1GB
appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max = 4
#################################################
#################################################
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog.json
appender.index_indexing_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_indexing_slowlog_rolling.layout.dataset = elasticsearch.index_indexing_slowlog
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog-%i.json.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.size.size = 1GB
appender.index_indexing_slowlog_rolling.strategy.type = DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling.strategy.max = 4
#################################################
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
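Note: since the elastic container mounts ./elastic/data at /var and path.logs is /var/log/elasticsearch, the JSON server log defined above should land on the host roughly here (a sketch; assumes the default cluster name elasticsearch):
tail -f elastic/data/log/elasticsearch/elasticsearch_server.json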

95
grafana-stack.yaml Normal file

@@ -0,0 +1,95 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-stack
  labels:
    app: grafana-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana-stack
  template:
    metadata:
      labels:
        app: grafana-stack
    spec:
      containers:
        - name: grafana
          image: docker.io/grafana/grafana:latest
          ports:
            - containerPort: 3000
              hostPort: 3000
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/grafana/grafana.ini:U
              name: grafana-config
            - mountPath: /var/lib/grafana:U
              name: grafana
        - name: loki
          image: docker.io/grafana/loki:latest
          args:
            - --config.file=/mnt/config/loki-local-config.yaml
          securityContext:
            runAsUser: 1000
          ports:
            - containerPort: 3100
              hostPort: 3100
              protocol: TCP
            - containerPort: 9096
              hostPort: 9096
              protocol: TCP
          volumeMounts:
            - mountPath: /tmp/loki:U
              name: loki
            - mountPath: /mnt/config/loki-local-config.yaml:U
              name: loki-config
        - name: elastic
          image: docker.elastic.co/elasticsearch/elasticsearch:8.15.3
          #args:
          #- --config.file=/usr/share/elasticsearch/config/elasticsearch.yaml
          securityContext:
            runAsUser: 1000
          env:
            - name: ELASTIC_PASSWORD
              value: "hehe"
          ports:
            - containerPort: 9200
              hostPort: 9200
              protocol: TCP
            - containerPort: 9300
              hostPort: 9300
              protocol: TCP
            - containerPort: 9096
              hostPort: 9096
              protocol: TCP
          volumeMounts:
            - mountPath: /var:U
              name: elastic
            - mountPath: /usr/share/elasticsearch/config:U
              name: elastic-config
      volumes:
        - hostPath:
            path: ./grafana/data
            type: Directory
          name: grafana
        - hostPath:
            path: ./grafana/grafana.ini
            type: File
          name: grafana-config
        - hostPath:
            path: ./loki/data
            type: Directory
          name: loki
        - hostPath:
            path: ./loki/loki-local-config.yaml
            type: File
          name: loki-config
        - hostPath:
            path: ./elastic/data
            type: Directory
          name: elastic
        - hostPath:
            path: ./elastic/
            type: Directory
          name: elastic-config
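Note: the :U suffix on the mountPath entries is a Podman extension rather than standard Kubernetes; it asks Podman to chown the host path to the container's UID/GID when the pod starts. The effect can be checked on the host after a run (a sketch):
ls -ln grafana/data loki/data elastic/data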

4
grafana/data/.gitignore vendored Normal file

@@ -0,0 +1,4 @@
*
*/
!.gitignore
!.gitkeep

0
grafana/data/.gitkeep Normal file

1201
grafana/grafana.ini Normal file

File diff suppressed because it is too large

4
loki/data/.gitignore vendored Normal file

@@ -0,0 +1,4 @@
*
*/
!.gitignore
!.gitkeep

0
loki/data/.gitkeep Normal file

50
loki/loki-local-config.yaml Normal file

@@ -0,0 +1,50 @@
# Do not use this configuration in production.
# It is for demonstration purposes only.
auth_enabled: false
server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: 2024-10-20
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

ruler:
  enable_api: true
  storage:
    type: local
    local:
      directory: /tmp/loki/rules
  rule_path: /tmp/loki/rules-temp
# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
#
# Statistics help us better understand how Loki is used, and they show us performance
# levels for most users. This helps us prioritize features and documentation.
# For more information on what's sent, look at
# https://github.com/grafana/loki/blob/main/pkg/usagestats/stats.go
# Refer to the buildReport method to see what goes into a report.
#
# If you would like to disable reporting, uncomment the following lines:
#analytics:
# reporting_enabled: false
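With auth_enabled: false the Loki API on port 3100 is open, so a smoke test can push one log line and read it back (a sketch; the job="smoke-test" label is arbitrary and date +%s%N assumes GNU date):
NOW_NS=$(date +%s%N)
curl -s -X POST http://localhost:3100/loki/api/v1/push -H 'Content-Type: application/json' -d "{\"streams\":[{\"stream\":{\"job\":\"smoke-test\"},\"values\":[[\"$NOW_NS\",\"hello loki\"]]}]}"
curl -s -G http://localhost:3100/loki/api/v1/query_range --data-urlencode 'query={job="smoke-test"}'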