build: add Docker functionality and documentation

Shyam Sunder 2018-07-11 23:30:13 -04:00 committed by rr-
parent 9730aa5c05
commit 6a6c4dc822
13 changed files with 590 additions and 212 deletions

.gitignore vendored

@@ -2,3 +2,4 @@ config.yaml
*/*_modules/
.coverage
.cache
docker-compose.yml

INSTALL-OLD.md Normal file

@@ -0,0 +1,212 @@
**This installation guide is deprecated and might be out
of date! It is recommended that you deploy using
[Docker](https://github.com/rr-/szurubooru/blob/master/INSTALL.md)
instead.**
This guide assumes Arch Linux. Although exact instructions for other
distributions are different, the steps stay roughly the same.
### Installing hard dependencies
```console
user@host:~$ sudo pacman -S postgresql
user@host:~$ sudo pacman -S python
user@host:~$ sudo pacman -S python-pip
user@host:~$ sudo pacman -S ffmpeg
user@host:~$ sudo pacman -S npm
user@host:~$ sudo pacman -S elasticsearch
user@host:~$ sudo pip install virtualenv
user@host:~$ python --version
Python 3.5.1
```
The reason `ffmpeg` is used over, say, `ImageMagick` or even `PIL` is that it
can generate thumbnails for Flash and video posts.
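For instance, thumbnailing a video post boils down to asking `ffmpeg` for a single scaled frame. A minimal sketch of building such a command (the helper name and file paths are hypothetical; the flags are standard `ffmpeg` options):

```python
# Sketch: build an ffmpeg invocation that extracts one frame of a video
# as a thumbnail. The helper name and file paths are hypothetical; the
# flags (-i, -ss, -vframes, -vf scale, -y) are standard ffmpeg options.
def thumbnail_cmd(src, dst, width=300, height=300):
    return [
        'ffmpeg',
        '-i', src,             # input video (or Flash) file
        '-ss', '0',            # seek to the beginning
        '-vframes', '1',       # emit exactly one frame
        '-vf', 'scale=%d:%d' % (width, height),  # thumbnail dimensions
        '-y', dst,             # overwrite the output file
    ]

print(thumbnail_cmd('post.webm', 'thumb.jpg'))
```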
### Setting up a database
First, basic `postgres` configuration:
```console
user@host:~$ sudo -i -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data
user@host:~$ sudo systemctl start postgresql
user@host:~$ sudo systemctl enable postgresql
```
Then creating a database:
```console
user@host:~$ sudo -i -u postgres createuser --interactive
Enter name of role to add: szuru
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
user@host:~$ sudo -i -u postgres createdb szuru
user@host:~$ sudo -i -u postgres psql -c "ALTER USER szuru PASSWORD 'dog';"
```
### Setting up elasticsearch
```console
user@host:~$ sudo systemctl start elasticsearch
user@host:~$ sudo systemctl enable elasticsearch
```
### Preparing environment
Getting `szurubooru`:
```console
user@host:~$ git clone https://github.com/rr-/szurubooru.git szuru
user@host:~$ cd szuru
```
Installing frontend dependencies:
```console
user@host:szuru$ cd client
user@host:szuru/client$ npm install
```
`npm` sandboxes dependencies by default, i.e. installs them to
`./node_modules`. This is good, because it avoids polluting the system with the
project's dependencies. To make Python work the same way, we'll use
`virtualenv`. Installing backend dependencies with `virtualenv` looks like
this:
```console
user@host:szuru/client$ cd ../server
user@host:szuru/server$ virtualenv python_modules # consistent with node_modules
user@host:szuru/server$ source python_modules/bin/activate # enters the sandbox
(python_modules) user@host:szuru/server$ pip install -r requirements.txt # installs the dependencies
```
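To confirm the sandbox is active after `source python_modules/bin/activate`, you can inspect the interpreter's prefixes; a small sketch (the helper name is made up; classic `virtualenv` sets `sys.real_prefix`, while the stdlib `venv` makes `sys.prefix` diverge from `sys.base_prefix`):

```python
import sys

# Sketch: detect whether the interpreter runs inside a virtualenv/venv.
# Classic virtualenv injects sys.real_prefix; the stdlib venv module
# instead makes sys.prefix differ from sys.base_prefix.
def in_virtualenv():
    return (
        getattr(sys, 'real_prefix', None) is not None
        or sys.prefix != getattr(sys, 'base_prefix', sys.prefix)
    )

print(in_virtualenv())
```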
### Preparing `szurubooru` for first run
1. Compile the frontend:
```console
user@host:szuru$ cd client
user@host:szuru/client$ node build.js
```
You can pass the flags `--no-transpile` to disable the JavaScript
transpiler (which provides compatibility with older browsers) and
`--debug` to generate JS source maps.
2. Configure things:
```console
user@host:szuru/client$ cd ..
user@host:szuru$ mv server/config.yaml.dist .
user@host:szuru$ cp config.yaml.dist config.yaml
user@host:szuru$ vim config.yaml
```
Pay extra attention to these fields:
- data directory,
- data URL,
- database,
- the `smtp` section.
3. Upgrade the database:
```console
user@host:szuru/client$ cd ../server
user@host:szuru/server$ source python_modules/bin/activate
(python_modules) user@host:szuru/server$ alembic upgrade head
```
`alembic` should have been installed during installation of `szurubooru`'s
dependencies.
4. Run the tests:
```console
(python_modules) user@host:szuru/server$ pytest
```
It is recommended to rebuild the frontend after each change to configuration.
### Wiring `szurubooru` to the web server
`szurubooru` is divided into two parts: public static files, and the API. It
tries not to impose any networking configurations on the user, so it is the
user's responsibility to wire these to their web server.
The static files are located in the `client/public/data` directory and are
meant to be exposed directly to the end users.
The API should be exposed using a WSGI server such as `waitress`, `gunicorn` or
similar. Other configurations might be possible but I didn't pursue them.
API calls are made to the relative URL `/api/`. Your HTTP server should be
configured to proxy this URL format to the WSGI server. Some users may prefer
to use a dedicated reverse proxy for this, to incorporate additional features
such as load balancing and SSL.
Note that the API URL in the virtual host configuration needs to be the same as
the one in `config.yaml`, so that the client knows how to access the backend!
#### Example
In this example:
- The booru is accessed from `http://example.com/`
- The API is accessed from `http://example.com/api`
- The API server listens locally on port 6666, and is proxied by nginx
- The static files are served from `/srv/www/booru/client/public/data`
**nginx configuration**:
```nginx
server {
    listen 80;
    server_name example.com;

    location ~ ^/api$ {
        return 302 /api/;
    }
    location ~ ^/api/(.*)$ {
        if ($request_uri ~* "/api/(.*)") { # preserve PATH_INFO as-is
            proxy_pass http://127.0.0.1:6666/$1;
        }
    }
    location / {
        root /srv/www/booru/client/public;
        try_files $uri /index.htm;
    }
}
```
**`config.yaml`**:
```yaml
data_url: 'http://example.com/data/'
data_dir: '/srv/www/booru/client/public/data'
```
To run the server using `waitress`:
```console
user@host:szuru/server$ source python_modules/bin/activate
(python_modules) user@host:szuru/server$ pip install waitress
(python_modules) user@host:szuru/server$ waitress-serve --port 6666 szurubooru.facade:app
```
or `gunicorn`:
```console
user@host:szuru/server$ source python_modules/bin/activate
(python_modules) user@host:szuru/server$ pip install gunicorn
(python_modules) user@host:szuru/server$ gunicorn szurubooru.facade:app -b 127.0.0.1:6666
```

INSTALL.md

@@ -1,206 +1,64 @@
This guide assumes Arch Linux. Although exact instructions for other
distributions are different, the steps stay roughly the same.
This assumes that you have Docker and Docker Compose already installed.
### Installing hard dependencies
### Prepare things
```console
user@host:~$ sudo pacman -S postgresql
user@host:~$ sudo pacman -S python
user@host:~$ sudo pacman -S python-pip
user@host:~$ sudo pacman -S ffmpeg
user@host:~$ sudo pacman -S npm
user@host:~$ sudo pacman -S elasticsearch
user@host:~$ sudo pip install virtualenv
user@host:~$ python --version
Python 3.5.1
```
The reason `ffmpeg` is used over, say, `ImageMagick` or even `PIL` is because of
Flash and video posts.
### Setting up a database
First, basic `postgres` configuration:
```console
user@host:~$ sudo -i -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data
user@host:~$ sudo systemctl start postgresql
user@host:~$ sudo systemctl enable postgresql
```
Then creating a database:
```console
user@host:~$ sudo -i -u postgres createuser --interactive
Enter name of role to add: szuru
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
user@host:~$ sudo -i -u postgres createdb szuru
user@host:~$ sudo -i -u postgres psql -c "ALTER USER szuru PASSWORD 'dog';"
```
### Setting up elasticsearch
```console
user@host:~$ sudo systemctl start elasticsearch
user@host:~$ sudo systemctl enable elasticsearch
```
### Preparing environment
Getting `szurubooru`:
```console
user@host:~$ git clone https://github.com/rr-/szurubooru.git szuru
user@host:~$ cd szuru
```
Installing frontend dependencies:
```console
user@host:szuru$ cd client
user@host:szuru/client$ npm install
```
`npm` sandboxes dependencies by default, i.e. installs them to
`./node_modules`. This is good, because it avoids polluting the system with the
project's dependencies. To make Python work the same way, we'll use
`virtualenv`. Installing backend dependencies with `virtualenv` looks like
this:
```console
user@host:szuru/client$ cd ../server
user@host:szuru/server$ virtualenv python_modules # consistent with node_modules
user@host:szuru/server$ source python_modules/bin/activate # enters the sandbox
(python_modules) user@host:szuru/server$ pip install -r requirements.txt # installs the dependencies
```
### Preparing `szurubooru` for first run
1. Compile the frontend:
1. Get `szurubooru`:
```console
user@host:szuru$ cd client
user@host:szuru/client$ node build.js
user@host:~$ git clone https://github.com/rr-/szurubooru.git szuru
user@host:~$ cd szuru
```
You can include the flags `--no-transpile` to disable the JavaScript
transpiler, which provides compatibility with older browsers, and
`--debug` to generate JS source mappings.
2. Configure things:
2. Configure the application:
```console
user@host:szuru/client$ cd ..
user@host:szuru$ cp config.yaml.dist config.yaml
user@host:szuru$ vim config.yaml
user@host:szuru$ cp server/config.yaml.dist config.yaml
user@host:szuru$ edit config.yaml
```
Pay extra attention to these fields:
- data directory,
- data URL,
- database,
- secret
- the `smtp` section.
3. Upgrade the database:
You can omit fields to use their default values.
3. Configure Docker Compose:
```console
user@host:szuru/client$ cd ../server
user@host:szuru/server$ source python_modules/bin/activate
(python_modules) user@host:szuru/server$ alembic upgrade head
user@host:szuru$ cp docker-compose.yml.example docker-compose.yml
user@host:szuru$ edit docker-compose.yml
```
`alembic` should have been installed during installation of `szurubooru`'s
dependencies.
Read the comments for guidance. For production use, it is *important*
that you configure the volumes appropriately to avoid data loss.
4. Run the tests:
### Running the Application
1. Configure ElasticSearch:
You may need to raise the `vm.max_map_count` kernel parameter to at least
`262144` in order for the ElasticSearch container to function. Instructions on
how to do so are provided
[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-cli-run-prod-mode).
2. Build or update the containers:
```console
(python_modules) user@host:szuru/server$ pytest
user@host:szuru$ docker-compose pull
user@host:szuru$ docker-compose build --pull
```
It is recommended to rebuild the frontend after each change to configuration.
This will build both the frontend and backend containers, and may take
some time.
3. Start and stop the application:
### Wiring `szurubooru` to the web server
`szurubooru` is divided into two parts: public static files, and the API. It
tries not to impose any networking configurations on the user, so it is the
user's responsibility to wire these to their web server.
The static files are located in the `client/public/data` directory and are
meant to be exposed directly to the end users.
The API should be exposed using WSGI server such as `waitress`, `gunicorn` or
similar. Other configurations might be possible but I didn't pursue them.
API calls are made to the relative URL `/api/`. Your HTTP server should be
configured to proxy this URL format to the WSGI server. Some users may prefer
to use a dedicated reverse proxy for this, to incorporate additional features
such as load balancing and SSL.
Note that the API URL in the virtual host configuration needs to be the same as
the one in the `config.yaml`, so that client knows how to access the backend!
#### Example
In this example:
- The booru is accessed from `http://example.com/`
- The API is accessed from `http://example.com/api`
- The API server listens locally on port 6666, and is proxied by nginx
- The static files are served from `/srv/www/booru/client/public/data`
**nginx configuration**:
```nginx
server {
listen 80;
server_name example.com;
location ~ ^/api$ {
return 302 /api/;
}
location ~ ^/api/(.*)$ {
if ($request_uri ~* "/api/(.*)") { # preserve PATH_INFO as-is
proxy_pass http://127.0.0.1:6666/$1;
}
}
location / {
root /srv/www/booru/client/public;
try_files $uri /index.htm;
}
}
```
**`config.yaml`**:
```yaml
data_url: 'http://example.com/data/'
data_dir: '/srv/www/booru/client/public/data'
```
To run the server using `waitress`:
```console
user@host:szuru/server$ source python_modules/bin/activate
(python_modules) user@host:szuru/server$ pip install waitress
(python_modules) user@host:szuru/server$ waitress-serve --port 6666 szurubooru.facade:app
```
or `gunicorn`:
```console
user@host:szuru/server$ source python_modules/bin/activate
(python_modules) user@host:szuru/server$ pip install gunicorn
(python_modules) user@host:szuru/server$ gunicorn szurubooru.facade:app -b 127.0.0.1:6666
```
```console
# To start:
user@host:szuru$ docker-compose up -d
# To monitor (CTRL+C to exit):
user@host:szuru$ docker-compose logs -f
# To stop
user@host:szuru$ docker-compose down
```
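The `vm.max_map_count` requirement from step 1 can also be checked programmatically; a sketch that reads the Linux procfs entry that `sysctl` manages (the helper and its `path` parameter are hypothetical, the latter existing only so the logic can be exercised off-host):

```python
# Sketch: verify that vm.max_map_count meets ElasticSearch's minimum.
# Reads the procfs entry that `sysctl vm.max_map_count` reports.
def max_map_count_ok(minimum=262144, path='/proc/sys/vm/max_map_count'):
    try:
        with open(path) as handle:
            return int(handle.read()) >= minimum
    except OSError:
        return None  # not a Linux host, or procfs unavailable

print(max_map_count_ok())
```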

README.md

@@ -32,6 +32,7 @@ scrubbing](http://sjp.pwn.pl/sjp/;2527372). It is pronounced as *shoorubooru*.
- FFmpeg
- node.js
It is recommended that you use Docker for deployment.
[See installation instructions.](https://github.com/rr-/szurubooru/blob/master/INSTALL.md)
## Screenshots

client/.dockerignore Normal file

@@ -0,0 +1,6 @@
node_modules/*
package-lock.json
Dockerfile
.dockerignore
**/.gitignore

client/Dockerfile Normal file

@@ -0,0 +1,31 @@
FROM node:9 as builder
WORKDIR /opt/app

COPY package.json ./
RUN npm install

COPY . ./

ARG BUILD_INFO="docker-latest"
ARG CLIENT_BUILD_ARGS=""
RUN node build.js ${CLIENT_BUILD_ARGS}
RUN find public/ -type f -size +5k -print0 | xargs -0 -- gzip -6 -k


FROM nginx:alpine
WORKDIR /var/www

RUN \
    # Create init file
    echo "#!/bin/sh" >> /init && \
    echo 'sed -i "s|__BACKEND__|${BACKEND_HOST}|" /etc/nginx/nginx.conf' \
        >> /init && \
    echo 'exec nginx -g "daemon off;"' >> /init && \
    chmod a+x /init

CMD ["/init"]
VOLUME ["/data"]

COPY nginx.conf.docker /etc/nginx/nginx.conf
COPY --from=builder /opt/app/public/ .
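The `RUN find public/ ... | gzip -6 -k` step pre-compresses every asset larger than 5 KiB while keeping the originals, so the nginx stage can serve either form via `gzip_static`. A rough Python equivalent of that step (the helper name is made up):

```python
import gzip
import os
import shutil

# Sketch: pre-compress files larger than min_size, writing file.gz next
# to each original -- roughly what `find ... -size +5k | xargs gzip -6 -k`
# does in the builder stage above.
def precompress(root, min_size=5 * 1024, level=6):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith('.gz') or os.path.getsize(path) <= min_size:
                continue
            with open(path, 'rb') as src, \
                    gzip.open(path + '.gz', 'wb', compresslevel=level) as dst:
                shutil.copyfileobj(src, dst)
```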

client/nginx.conf.docker Normal file

@@ -0,0 +1,56 @@
worker_processes 1;

error_log /dev/stderr warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr -> $request [$status] - '
                    'referer: $http_referer $http_x_forwarded_for';
    access_log /dev/stdout main;

    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 100M;

    upstream backend {
        server __BACKEND__:6666;
    }

    server {
        listen 80 default_server;

        location ~ ^/api$ {
            return 302 /api/;
        }
        location ~ ^/api/(.*)$ {
            if ($request_uri ~* "/api/(.*)") {
                proxy_pass http://backend/$1;
            }
            gzip on;
            gzip_comp_level 3;
            gzip_min_length 20;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain application/json;
        }
        location /data/ {
            rewrite ^/data/(.*) /$1 break;
            root /data;
        }
        location / {
            root /var/www;
            try_files $uri /index.htm;
            gzip_static on;
            gzip_proxied expired no-cache no-store private auth;
        }
    }
}

docker-compose.yml.example Normal file

@@ -0,0 +1,102 @@
## Example Docker Compose configuration
##
## Use this as a template to set up docker-compose, or as guide to set up other
## orchestration services
version: '3.1'
services:

    ## Python3 container for backend API
    backend:
        build:
            context: ./server
        depends_on:
            - sql
            - elasticsearch
        environment: # commented values are defaults
            ## These should be the names of the dependent containers listed
            ## above, or accessible hostnames/IP addresses if these services
            ## are running outside of Docker
            POSTGRES_HOST: sql
            ESEARCH_HOST: elasticsearch
            ## Credentials for database
            POSTGRES_USER: szuru
            POSTGRES_PASSWORD: badpass
            ## Leave commented if using the official postgres container,
            ## it will default to the value in POSTGRES_USER
            #POSTGRES_DB:
            ## Leave commented if using the default port 5432
            #POSTGRES_PORT: 5432
            ## Leave commented if using the default port 9200
            #ESEARCH_PORT: 9200
            ## Uncomment and change if you want to use a different index
            #ESEARCH_INDEX: szurubooru
            ## Leave commented unless you want verbose SQL in the container logs
            #LOG_SQL: 1
        volumes:
            - data:/data
            ## Mount a local config.yaml for customizations that are not
            ## covered in `config.yaml.dist`; comment this line out if you
            ## are not going to supply a YAML file
            - ./config.yaml:/opt/config.yaml

    ## HTTP container for frontend
    frontend:
        build:
            context: ./client
            args:
                ## This shows up on the home screen, indicating build information
                ## Change as desired
                BUILD_INFO: docker-example
        depends_on:
            - backend
        environment:
            ## This should be the name of the backend container above
            BACKEND_HOST: backend
        volumes:
            - data:/data:ro
        ports:
            ## If you want to expose the website on another port like 80,
            ## change to 80:80
            - 8080:80

    ## PostgreSQL container for database
    sql:
        image: postgres:alpine
        restart: unless-stopped
        environment:
            ## These should match the credentials used in the backend container
            POSTGRES_USER: szuru
            POSTGRES_PASSWORD: badpass
        volumes:
            - database:/var/lib/postgresql/data

    ## ElasticSearch container for image indexing
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.1
        environment:
            ## Specifies the Java heap size used
            ## Read
            ## https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
            ## for more info
            ES_JAVA_OPTS: -Xms512m -Xmx512m
        volumes:
            - index:/usr/share/elasticsearch/data

volumes:
    ## IMPORTANT FOR PRODUCTION USE:
    ## To avoid data loss, you should read and understand how Docker volumes
    ## work before proceeding. Information can be found on
    ## https://docs.docker.com/storage/volumes/
    ## These mounts should be configured with drivers appropriate for your
    ## system. For small deployments, bind mounts or using the local-persist
    ## driver should be fine. Make sure you mount to a directory that is safe
    ## and backed up.
    ## local-persist driver can be found at:
    ## https://github.com/CWSpear/local-persist
    ## It is okay to leave these as-is for development or testing purposes
    data: # This volume will hold persistent image and user data for the board
    database: # This holds the SQL database
    index: # Scratch space for ElasticSearch index
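As a hypothetical illustration of the volume configuration the comments describe, a definition using the `local-persist` driver might look like this (the `mountpoint` paths are illustrative; verify the option names against the local-persist documentation before relying on them):

```yaml
volumes:
    data:
        driver: local-persist
        driver_opts:
            mountpoint: /srv/szurubooru/data
    database:
        driver: local-persist
        driver_opts:
            mountpoint: /srv/szurubooru/database
    index: # scratch space; the default local driver is fine here
```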

server/.dockerignore Normal file

@@ -0,0 +1,8 @@
szurubooru/tests/*
setup.cfg
.pylintrc
mypi.ini
Dockerfile
.dockerignore
**/.gitignore

server/Dockerfile Normal file

@@ -0,0 +1,46 @@
FROM scratch as approot
WORKDIR /opt/app
COPY alembic.ini wait-for-es generate-thumb ./
COPY szurubooru/ ./szurubooru/
COPY config.yaml.dist ../


FROM python:3.6-slim
WORKDIR /opt/app

ARG PUID=1000
ARG PGID=1000
ARG PORT=6666

RUN \
    # Set users
    mkdir -p /opt/app /data && \
    groupadd -g ${PGID} app && \
    useradd -d /opt/app -M -c '' -g app -r -u ${PUID} app && \
    chown -R app:app /opt/app /data && \
    # Create init file
    echo "#!/bin/sh" >> /init && \
    echo "set -e" >> /init && \
    echo "cd /opt/app" >> /init && \
    echo "./wait-for-es" >> /init && \
    echo "alembic upgrade head" >> /init && \
    echo "exec waitress-serve --port ${PORT} szurubooru.facade:app" \
        >> /init && \
    chmod a+x /init && \
    # Install ffmpeg
    apt-get -yqq update && \
    apt-get -yq install --no-install-recommends ffmpeg && \
    rm -rf /var/lib/apt/lists/* && \
    # Install waitress
    pip3 install --no-cache-dir waitress

COPY --chown=app:app requirements.txt ./requirements.txt
RUN pip3 install --no-cache-dir -r ./requirements.txt

# done to minimize number of layers in final image
COPY --chown=app:app --from=approot / /

VOLUME ["/data/"]
EXPOSE ${PORT}
USER app
CMD ["/init"]

server/config.yaml.dist

@@ -1,66 +1,46 @@
# rather than editing this file, it is strongly suggested to create config.yaml
# and override only what you need.
name: szurubooru # shown in the website title and on the front page
debug: 0 # generate server logs?
show_sql: 0 # show sql in server logs?
secret: change # used to salt the users' password hashes
data_url: # used to form links to posts and avatars, example: http://example.com/data/
data_dir: # absolute path for posts and avatars storage, example: /srv/www/booru/client/public/data/
user_agent: # user agent name used to download files from the web on behalf of the api users
# usage: schema://user:password@host:port/database_name
# example: postgres://szuru:dog@localhost:5432/szuru_test
# example (useful for tests): sqlite:///:memory:
database:
test_database: 'sqlite:///:memory:' # required for running the test suite
# shown in the website title and on the front page
name: szurubooru
# user agent name used to download files from the web on behalf of the api users
user_agent:
# used to salt the users' password hashes
secret: change
# required for running the test suite
test_database: 'sqlite:///:memory:'
# Delete thumbnails and source files on post delete
# The default is no, to mitigate the impact of admins performing
# unchecked post purges.
delete_source_files: no
thumbnails:
    avatar_width: 300
    avatar_height: 300
    post_width: 300
    post_height: 300
convert:
    gif:
        to_webm: false
        to_mp4: false
# used to send password reset e-mails
smtp:
    host: # example: localhost
    port: # example: 25
    user: # example: bot
    pass: # example: groovy123
# host can be left empty, in which case it is recommended to fill contact_email.
# host can be left empty, in which case it is recommended to fill contactEmail.
contact_email: # example: bob@example.com. Meant for manual password reset procedures
# used for reverse image search
elasticsearch:
    host: localhost
    port: 9200
    index: szurubooru
enable_safety: yes
tag_name_regex: ^\S+$
tag_category_name_regex: ^[^\s%+#/]+$
# don't make these more restrictive unless you want to annoy people; if you do
# customize them, make sure to update the instructions in the registration form
# template as well.
@@ -69,7 +49,6 @@ user_name_regex: '^[a-zA-Z0-9_-]{1,32}$'
default_rank: regular
privileges:
    'users:create:self': anonymous # Registration permission
    'users:create:any': administrator
@@ -150,3 +129,16 @@ privileges:
    'snapshots:list': power
    'uploads:create': regular
## ONLY SET THESE IF DEPLOYING OUTSIDE OF DOCKER
#debug: 0 # generate server logs?
#show_sql: 0 # show sql in server logs?
#data_url: /data/
#data_dir: /var/www/data
## usage: schema://user:password@host:port/database_name
## example: postgres://szuru:dog@localhost:5432/szuru_test
#database:
#elasticsearch: # used for reverse image search
#    host: localhost
#    port: 9200
#    index: szurubooru

server/szurubooru/config.py

@@ -1,6 +1,7 @@
from typing import Dict
import os
import yaml
from szurubooru import errors
def merge(left: Dict, right: Dict) -> Dict:
@@ -15,12 +16,43 @@ def merge(left: Dict, right: Dict) -> Dict:
    return left
def docker_config() -> Dict:
    for key in [
            'POSTGRES_USER',
            'POSTGRES_PASSWORD',
            'POSTGRES_HOST',
            'ESEARCH_HOST'
    ]:
        if not os.getenv(key, False):
            raise errors.ConfigError(f'Environment variable "{key}" not set')
    return {
        'debug': True,
        'show_sql': int(os.getenv('LOG_SQL', 0)),
        'data_url': os.getenv('DATA_URL', '/data/'),
        'data_dir': '/data/',
        'database':
            'postgres://%(user)s:%(pass)s@%(host)s:%(port)d/%(db)s' % {
                'user': os.getenv('POSTGRES_USER'),
                'pass': os.getenv('POSTGRES_PASSWORD'),
                'host': os.getenv('POSTGRES_HOST'),
                'port': int(os.getenv('POSTGRES_PORT', 5432)),
                'db': os.getenv('POSTGRES_DB', os.getenv('POSTGRES_USER'))
            },
        'elasticsearch': {
            'host': os.getenv('ESEARCH_HOST'),
            'port': int(os.getenv('ESEARCH_PORT', 9200)),
            'index': os.getenv('ESEARCH_INDEX', 'szurubooru')
        }
    }
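For illustration, the DSN template in `docker_config` expands like this (the credentials are the placeholder values from `docker-compose.yml.example`, set here by hand):

```python
import os

# Hypothetical environment, mirroring docker-compose.yml.example.
os.environ.update({
    'POSTGRES_USER': 'szuru',
    'POSTGRES_PASSWORD': 'badpass',
    'POSTGRES_HOST': 'sql',
})

# Same template as docker_config(); unset variables fall back to defaults.
dsn = 'postgres://%(user)s:%(pass)s@%(host)s:%(port)d/%(db)s' % {
    'user': os.getenv('POSTGRES_USER'),
    'pass': os.getenv('POSTGRES_PASSWORD'),
    'host': os.getenv('POSTGRES_HOST'),
    'port': int(os.getenv('POSTGRES_PORT', 5432)),
    'db': os.getenv('POSTGRES_DB', os.getenv('POSTGRES_USER')),
}
print(dsn)
```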
def read_config() -> Dict:
    with open('../config.yaml.dist') as handle:
        ret = yaml.load(handle.read())
    if os.path.exists('../config.yaml'):
        with open('../config.yaml') as handle:
            ret = merge(ret, yaml.load(handle.read()))
    if os.path.exists('/.dockerenv'):
        ret = merge(ret, docker_config())
    return ret
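`read_config` layers three sources: the defaults from `config.yaml.dist`, an optional local `config.yaml`, and (when running in Docker) `docker_config()`. The `merge` helper's body is elided from this diff; a minimal sketch of the recursive-merge semantics it needs to provide (not the project's exact implementation):

```python
from typing import Dict

def merge(left: Dict, right: Dict) -> Dict:
    # Overlay right onto left: nested dicts merge recursively,
    # anything else in right replaces the value in left.
    for key, value in right.items():
        if isinstance(value, dict) and isinstance(left.get(key), dict):
            merge(left[key], value)
        else:
            left[key] = value
    return left

defaults = {'name': 'szurubooru', 'elasticsearch': {'host': 'localhost', 'port': 9200}}
override = {'elasticsearch': {'host': 'elasticsearch'}}
print(merge(defaults, override))
# {'name': 'szurubooru', 'elasticsearch': {'host': 'elasticsearch', 'port': 9200}}
```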

server/wait-for-es Executable file

@@ -0,0 +1,33 @@
#!/usr/bin/env python3
'''
Docker helper script. Blocks until the ElasticSearch service is ready.
'''
import logging
import time
import elasticsearch
from szurubooru import config, errors
def main():
    print('Looking for ElasticSearch connection...')
    logging.basicConfig(level=logging.ERROR)
    es = elasticsearch.Elasticsearch([{
        'host': config.config['elasticsearch']['host'],
        'port': config.config['elasticsearch']['port'],
    }])
    TIMEOUT = 30
    DELAY = 0.1
    for _ in range(int(TIMEOUT / DELAY)):
        try:
            es.cluster.health(wait_for_status='yellow')
            print('Connected to ElasticSearch!')
            return
        except Exception:
            time.sleep(DELAY)
    raise errors.ThirdPartyError('Error connecting to ElasticSearch')


if __name__ == '__main__':
    main()