
When you publish your app on the App Store and want it to be visible to your target audience, you need to use the right keywords. These keywords are used by the indexing algorithm to determine your app's ranking in search results. This process of enhancing visibility is known as App Store Optimization (ASO).
There are many ASO tools available on the market, but they come with a few significant drawbacks:
In this article, we will create a simple tool that meets your requirements with minimal effort.
Let's start by outlining the requirements for this tool:
We'll be using a microservice architecture. Without diving into its pros and cons, this approach allows us to make each module independent. This means we can develop, test, or fix one module without affecting the others.

We'll use the following Python packages:

- click: for argument handling.
- psycopg2: for working with PostgreSQL.
- flask: for handling requests in the scheduler.
- volkovlabs-form-panel: a Grafana plugin that gives us a form for scheduling tasks, eliminating the need for a separate front-end.

To deploy the containerized network of microservices, we'll use Docker Compose. It allows us to describe, link, and deploy multiple containers simultaneously.
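For reference, the dependencies above translate into requirements.txt files roughly like these (version pins omitted; psycopg2-binary is assumed here as the easier-to-install variant of psycopg2, and requests/flask-cors are included because the code below imports them):

```
# scrapper/requirements.txt
click
psycopg2-binary
requests

# control/requirements.txt
click
psycopg2-binary
flask
flask-cors
```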
All custom names, credentials, and secrets are stored in a .env file. This makes configuration more flexible and allows for reuse in additional scripts.
To facilitate the use of these additional scripts, let's wrap docker compose in a Bash script, deploy.sh. To deploy the tool, simply run this file:
source .env
docker compose up -d
Now, let's write a basic compose.yaml file (the top-level version key is obsolete in recent Docker Compose releases, so we omit it):
services:
  aso_postgres:
    # ...
    networks:
      - aso_network
  aso_pgadmin:
    # ...
    networks:
      - aso_network
  aso_grafana:
    # ...
    networks:
      - aso_network
  aso_scrapper:
    # ...
    networks:
      - aso_network
  aso_control_back:
    # ...
    networks:
      - aso_network

networks:
  aso_network:
    driver: bridge

volumes:
  aso_postgres_data:
    driver: local
  aso_grafana_data:
    driver: local
  aso_pgadmin_data:
    driver: local
We've defined our five services, all connected to the aso_network Docker network. Additionally, we've pre-defined volumes for some containers to preserve data during updates.
We'll use the official postgres image. Define its environment variables in the .env file:
DB_NAME="aso"
DB_USER="marvin"
DB_PASSWORD="marviniscool"
DB_HOST="aso_postgres_container"
Use these variables in the container definition in compose.yaml:
aso_postgres:
  container_name: aso_postgres_container
  image: postgres:latest
  restart: unless-stopped
  healthcheck:
    test: [ "CMD-SHELL", "pg_isready -d ${DB_NAME} -U ${DB_USER}" ]
  hostname: aso-postgres
  ports:
    - 5432:5432
  networks:
    - aso_network
  volumes:
    - aso_postgres_data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: "${DB_NAME}"
    POSTGRES_USER: "${DB_USER}"
    POSTGRES_PASSWORD: "${DB_PASSWORD}"
    POSTGRES_HOST: "${DB_HOST}"
    PGDATA: /var/lib/postgresql/data
The healthcheck will be useful later to ensure other containers launch only when the database is successfully loaded.
Here, we'll use the dpage/pgadmin4 image. Like postgres, it requires some environment variables, reusing some of those defined earlier:
DB_PORT=5432
DBADM_USER="marvin@domain.com"
DBADM_PASSWORD="marviniscool"
Service definition:
aso_pgadmin:
  container_name: aso_pgadmin_container
  image: dpage/pgadmin4:latest
  depends_on:
    aso_postgres:
      condition: service_healthy
  restart: unless-stopped
  ports:
    - 8888:80
  networks:
    - aso_network
  volumes:
    - aso_pgadmin_data:/var/lib/pgadmin
    - ./pgadmin/servers.json:/pgadmin4/servers.json
    - ./pgadmin/pgpass:/tmp/pgpass
  environment:
    PGADMIN_DEFAULT_EMAIL: "${DBADM_USER}"
    PGADMIN_DEFAULT_PASSWORD: "${DBADM_PASSWORD}"
  entrypoint: >
    /bin/sh -c "
    chmod 600 /tmp/pgpass;
    mkdir -p /var/lib/pgadmin/storage/marvin_domain.com;
    cp /tmp/pgpass /var/lib/pgadmin/storage/marvin_domain.com/.;
    /entrypoint.sh;
    "
The additional volumes entries, together with the custom entrypoint, let us set up a connection to our DB at deployment time. We need to generate two files (servers.json and pgpass), which are copied during the entrypoint step because of permission constraints.
Generate these files in deploy.sh using the environment variables before running the docker compose command:
source .env
# Prepare PGAdmin
echo "{\"Servers\":{\"1\":{\"Name\":\"${DB_NAME}\",\"Group\":\"Servers\",\"Host\":\"${DB_HOST}\",\"Port\":${DB_PORT},\"MaintenanceDB\":\"${DB_NAME}\",\"Username\":\"${DB_USER}\",\"ConnectionParameters\":{\"sslmode\":\"prefer\",\"connect_timeout\":10,\"passfile\":\"/pgpass\"}}}}" > ./pgadmin/servers.json
echo "${DB_HOST}:${DB_PORT}:*:${DB_USER}:${DB_PASSWORD}" > ./pgadmin/pgpass
docker compose up -d
Now, we can deploy the composition and verify the connection to our database from PgAdmin:

Now we need to set up a schema for our database. While we could build it manually using PgAdmin, the goal is to create a "plug-n-play" tool. First, let's design the schema:

The idea is to keep the data simple and lightweight, store it indefinitely, and simply activate or deactivate entries as needed.
To deploy the schema with the container, we need to include this file:
psql_dump.sql:
-- Create Tables
CREATE TABLE IF NOT EXISTS apps (
id INTEGER PRIMARY KEY NOT NULL,
name TEXT NOT NULL,
is_archived BOOLEAN NOT NULL
);
CREATE TABLE IF NOT EXISTS countries (
id INTEGER PRIMARY KEY NOT NULL,
code TEXT NOT NULL,
name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS storefronts (
id INTEGER PRIMARY KEY NOT NULL,
name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS words (
id SERIAL PRIMARY KEY NOT NULL,
value TEXT NOT NULL,
is_archived BOOLEAN NOT NULL
);
CREATE TABLE IF NOT EXISTS tasks (
id SERIAL,
app_id INTEGER NOT NULL,
country_id INTEGER NOT NULL,
store_id INTEGER NOT NULL,
word_id INTEGER NOT NULL,
is_active BOOLEAN NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (app_id) REFERENCES apps (id),
FOREIGN KEY (country_id) REFERENCES countries (id),
FOREIGN KEY (store_id) REFERENCES storefronts (id),
FOREIGN KEY (word_id) REFERENCES words (id)
);
CREATE TABLE IF NOT EXISTS ranks (
id SERIAL,
task_id INTEGER NOT NULL,
time TIMESTAMP NOT NULL,
rank INTEGER NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (task_id) REFERENCES tasks (id)
);
-- Initial standard data
INSERT INTO storefronts (id, name)
SELECT
(datajson->'id')::text::int,
trim('"' FROM (datajson->'name')::text)
FROM jsonb_array_elements('[{"id": 20, "name": "iPad"}, {"id": 21, "name": "iPhone"}, {"id": 23, "name": "K71"}, {"id": 24, "name": "P71"}, {"id": 25, "name": "K8"}, {"id": 26, "name": "P8"}, {"id": 29, "name": "P84"}, {"id": 30, "name": "K84"}, {"id": 31, "name": "Android"}, {"id": 35, "name": "Watch"}, {"id": 38, "name": "MacPodcasts1"}]'::jsonb) AS t(datajson);
INSERT INTO countries (id, code, name)
SELECT
(datajson->'id')::text::int,
trim('"' FROM (datajson->'code')::text),
trim('"' FROM (datajson->'name')::text)
FROM jsonb_array_elements('[{"id": 143563, "code": "DZ", "name": "Algeria"}, {"id": 143564, "code": "AO", "name": "Angola"}, {"id": 143538, "code": "AI", "name": "Anguilla"}, {"id": 143505, "code": "AR", "name": "Argentina"}, {"id": 143524, "code": "AM", "name": "Armenia"}, {"id": 143460, "code": "AU", "name": "Australia"}, {"id": 143445, "code": "AT", "name": "Austria"}, {"id": 143568, "code": "AZ", "name": "Azerbaijan"}, {"id": 143559, "code": "BH", "name": "Bahrain"}, {"id": 143541, "code": "BB", "name": "Barbados"}, {"id": 143565, "code": "BY", "name": "Belarus"}, {"id": 143446, "code": "BE", "name": "Belgium"}, {"id": 143555, "code": "BZ", "name": "Belize"}, {"id": 143542, "code": "BM", "name": "Bermuda"}, {"id": 143556, "code": "BO", "name": "Bolivia (Plurinational State of)"}, {"id": 143525, "code": "BW", "name": "Botswana"}, {"id": 143503, "code": "BR", "name": "Brazil"}, {"id": 143543, "code": "VG", "name": "Virgin Islands (British)"}, {"id": 143560, "code": "BN", "name": "Brunei Darussalam"}, {"id": 143526, "code": "BG", "name": "Bulgaria"}, {"id": 143455, "code": "CA", "name": "Canada"}, {"id": 143544, "code": "KY", "name": "Cayman Islands (the)"}, {"id": 143483, "code": "CL", "name": "Chile"}, {"id": 143465, "code": "CN", "name": "China"}, {"id": 143501, "code": "CO", "name": "Colombia"}, {"id": 143495, "code": "CR", "name": "Costa Rica"}, {"id": 143494, "code": "HR", "name": "Croatia"}, {"id": 143557, "code": "CY", "name": "Cyprus"}, {"id": 143489, "code": "CZ", "name": "Czechia"}, {"id": 143458, "code": "DK", "name": "Denmark"}, {"id": 143545, "code": "DM", "name": "Dominica"}, {"id": 143509, "code": "EC", "name": "Ecuador"}, {"id": 143516, "code": "EG", "name": "Egypt"}, {"id": 143506, "code": "SV", "name": "El Salvador"}, {"id": 143518, "code": "EE", "name": "Estonia"}, {"id": 143447, "code": "FI", "name": "Finland"}, {"id": 143442, "code": "FR", "name": "France"}, {"id": 143443, "code": "DE", "name": "Germany"}, {"id": 143444, "code": "GB", "name": 
"United Kingdom of Great Britain and Northern Ireland (the)"}, {"id": 143573, "code": "GH", "name": "Ghana"}, {"id": 143448, "code": "GR", "name": "Greece"}, {"id": 143546, "code": "GD", "name": "Grenada"}, {"id": 143504, "code": "GT", "name": "Guatemala"}, {"id": 143553, "code": "GY", "name": "Guyana"}, {"id": 143510, "code": "HN", "name": "Honduras"}, {"id": 143463, "code": "HK", "name": "Hong Kong"}, {"id": 143482, "code": "HU", "name": "Hungary"}, {"id": 143558, "code": "IS", "name": "Iceland"}, {"id": 143467, "code": "IN", "name": "India"}, {"id": 143476, "code": "ID", "name": "Indonesia"}, {"id": 143449, "code": "IE", "name": "Ireland"}, {"id": 143491, "code": "IL", "name": "Israel"}, {"id": 143450, "code": "IT", "name": "Italy"}, {"id": 143511, "code": "JM", "name": "Jamaica"}, {"id": 143462, "code": "JP", "name": "Japan"}, {"id": 143528, "code": "JO", "name": "Jordan"}, {"id": 143529, "code": "KE", "name": "Kenya"}, {"id": 143493, "code": "KW", "name": "Kuwait"}, {"id": 143519, "code": "LV", "name": "Latvia"}, {"id": 143497, "code": "LB", "name": "Lebanon"}, {"id": 143520, "code": "LT", "name": "Lithuania"}, {"id": 143451, "code": "LU", "name": "Luxembourg"}, {"id": 143515, "code": "MO", "name": "Macao"}, {"id": 143530, "code": "MK", "name": "Republic of North Macedonia"}, {"id": 143531, "code": "MG", "name": "Madagascar"}, {"id": 143473, "code": "MY", "name": "Malaysia"}, {"id": 143532, "code": "ML", "name": "Mali"}, {"id": 143521, "code": "MT", "name": "Malta"}, {"id": 143533, "code": "MU", "name": "Mauritius"}, {"id": 143468, "code": "MX", "name": "Mexico"}, {"id": 143547, "code": "MS", "name": "Montserrat"}, {"id": 143484, "code": "NP", "name": "Nepal"}, {"id": 143452, "code": "NL", "name": "Netherlands (the)"}, {"id": 143461, "code": "NZ", "name": "New Zealand"}, {"id": 143512, "code": "NI", "name": "Nicaragua"}, {"id": 143534, "code": "NE", "name": "Niger (the)"}, {"id": 143561, "code": "NG", "name": "Nigeria"}, {"id": 143457, "code": "NO", "name": 
"Norway"}, {"id": 143562, "code": "OM", "name": "Oman"}, {"id": 143477, "code": "PK", "name": "Pakistan"}, {"id": 143485, "code": "PA", "name": "Panama"}, {"id": 143513, "code": "PY", "name": "Paraguay"}, {"id": 143507, "code": "PE", "name": "Peru"}, {"id": 143474, "code": "PH", "name": "Philippines (the)"}, {"id": 143478, "code": "PL", "name": "Poland"}, {"id": 143453, "code": "PT", "name": "Portugal"}, {"id": 143498, "code": "QA", "name": "Qatar"}, {"id": 143487, "code": "RO", "name": "Romania"}, {"id": 143469, "code": "RU", "name": "Russian Federation (the)"}, {"id": 143479, "code": "SA", "name": "Saudi Arabia"}, {"id": 143535, "code": "SN", "name": "Senegal"}, {"id": 143464, "code": "SG", "name": "Singapore"}, {"id": 143496, "code": "SK", "name": "Slovakia"}, {"id": 143499, "code": "SI", "name": "Slovenia"}, {"id": 143472, "code": "ZA", "name": "South Africa"}, {"id": 143454, "code": "ES", "name": "Spain"}, {"id": 143486, "code": "LK", "name": "Sri Lanka"}, {"id": 143554, "code": "SR", "name": "Suriname"}, {"id": 143456, "code": "SE", "name": "Sweden"}, {"id": 143459, "code": "CH", "name": "Switzerland"}, {"id": 143470, "code": "TW", "name": "Taiwan (Province of China)"}, {"id": 143572, "code": "TZ", "name": "Tanzania, United Republic of"}, {"id": 143475, "code": "TH", "name": "Thailand"}, {"id": 143536, "code": "TN", "name": "Tunisia"}, {"id": 143480, "code": "TR", "name": "Turkey"}, {"id": 143537, "code": "UG", "name": "Uganda"}, {"id": 143492, "code": "UA", "name": "Ukraine"}, {"id": 143481, "code": "AE", "name": "United Arab Emirates (the)"}, {"id": 143441, "code": "US", "name": "United States of America (the)"}, {"id": 143514, "code": "UY", "name": "Uruguay"}, {"id": 143566, "code": "UZ", "name": "Uzbekistan"}, {"id": 143502, "code": "VE", "name": "Venezuela (Bolivarian Republic of)"}, {"id": 143471, "code": "VN", "name": "Viet Nam"}, {"id": 143571, "code": "YE", "name": "Yemen"}]'::jsonb) AS t(datajson);
To put the SQL file into the container, we can mount it in the volumes section of the compose.yaml:
# aso_postgres
# ...
volumes:
  - aso_postgres_data:/var/lib/postgresql/data
  - ./postgres/psql_dump.sql:/docker-entrypoint-initdb.d/psql_dump.sql
This configuration mounts the psql_dump.sql file into the docker-entrypoint-initdb.d directory, ensuring that our schema is set up automatically when the container is deployed.
You can now check the resulting schema in PgAdmin.
Now that we've set up our database, we can start populating it with data. To do this, let's deploy a Python container:
aso_scrapper:
  container_name: aso_scrapper_container
  image: python:3.12
  depends_on:
    aso_postgres:
      condition: service_healthy
  restart: unless-stopped
  ports:
    - 3100:3100
  networks:
    - aso_network
  volumes:
    - ./scrapper:/scrapper
  working_dir: /scrapper
  entrypoint: >
    /bin/sh -c "
    pip3 install -r requirements.txt;
    python3 -u -m asoscrapper --database=${DB_NAME} --host=${DB_HOST} --user=${DB_USER} --password=${DB_PASSWORD} --port=${DB_PORT} --delay=${SCRAP_DELAY};
    "
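Note that the entrypoint references ${SCRAP_DELAY}, which we haven't defined yet. Add it to the .env file; the value (seconds between store requests) below is just an example:

```
SCRAP_DELAY=5
```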
Now, we need to write a couple of pieces of Python code. First, I'll explain the essential code, followed by the utility code. Before we start, ensure you have a directory structure like this:
└── scrapper
├── asoscrapper
│ ├── __main__.py
│ ├── console.py
│ ├── database.py
│ ├── model.py
│ └── query.py
└── requirements.txt
__main__.py:
import time
import click
from datetime import datetime
from .query import *
from .database import *
from .console import *
from .model import *
def _iteration(creds, delay):
    tasks = None
    with ASODatabaseConnection(creds) as base:
        tasks = base.get_tasks_list()
    if tasks is None or len(tasks) == 0:
        print_info('Nothing to scrap ¯\\_(ツ)_/¯')
        time.sleep(delay * 10)
        return
    for task in tasks:
        time.sleep(delay)
        try:
            result = get_app_ids_for_query(task)
        except Exception as e:
            print_query(task.appName, task.storeName, task.countryName, task.query, e)
            continue
        try:
            rank = result.index(str(task.app))
        except ValueError:
            rank = -1
        with ASODatabaseConnection(creds) as base:
            print_query(task.appName, task.storeName, task.countryName, task.query, rank)
            base.store_rank(Rank(
                task.id,
                datetime.today().strftime('%Y-%m-%d %H:%M:%S'),
                rank
            ))


@click.command()
@click.option('--database', type=str, help='A name of your database')
@click.option('--host', type=str, help='A host name where the database is deployed')
@click.option('--user', type=str, help='A user name for the database')
@click.option('--password', type=str, help='A password of the user')
@click.option('--port', type=int, help='A port on the host')
@click.option('--delay', type=int, help='A delay between requests in seconds')
def scrap(database, host, user, password, port, delay):
    creds = DatabaseCreds(
        database=database,
        host=host,
        user=user,
        password=password,
        port=port
    )
    print_info('Start scrapping!')
    while True:
        _iteration(creds, delay)
        time.sleep(delay)


if __name__ == '__main__':
    scrap()
Here, we continuously read all active tasks from our database, process them, and write the results back. To determine the rank of our app, we simply search for its index in the search response list.
NOTE: We must introduce delays between requests to the store to avoid being temporarily blocked.
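The rank lookup boils down to finding the app's position in the returned list of ids. A minimal sketch of the same logic (function name is illustrative):

```python
def rank_in_results(app_id: int, result_ids: list) -> int:
    """Return the 0-based position of the app in the search results, or -1 if absent."""
    try:
        # Store ids come back as strings, so convert before searching
        return result_ids.index(str(app_id))
    except ValueError:
        return -1

print(rank_in_results(1580189385, ["111", "1580189385", "222"]))  # 1
print(rank_in_results(1580189385, ["111", "222"]))                # -1
```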
query.py:
import requests
import json
from dataclasses import dataclass
from urllib.parse import quote_plus
from .model import *
@dataclass
class QueryParams:
    country: int
    storefront: int
    query: str


# Not defined in model.py, so declare it here
class AppStoreException(Exception):
    pass


def get_app_ids_for_query(task: Task):
    url = "https://search.itunes.apple.com/WebObjects/MZStore.woa/wa/search?clientApplication=Software&media=software&term="
    url += quote_plus(task.query)
    headers = {
        "X-Apple-Store-Front": f"{task.country},{task.store} t:native",
        "Accept-Language": "en",
        "User-Agent": "Mozilla/5.0"
    }
    try:
        response = requests.get(url, headers=headers)
    except requests.exceptions.ConnectionError as ce:
        raise AppStoreException("Cannot connect to store: {0}".format(str(ce)))
    try:
        result = response.json()
    except json.JSONDecodeError:
        raise AppStoreException(f"Could not parse app store response: {response}")
    return [app["id"] for app in result["bubbles"][0]["results"]]
It's quite simple: we build a list of app identifiers from the response of a search request to the App Store.
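Given a response shaped the way the code expects, the extraction is a one-liner (the sample payload below is illustrative, not a real App Store response):

```python
# Illustrative payload only: shaped like the response the scraper expects
sample = {
    "bubbles": [
        {"results": [{"id": "1580189385"}, {"id": "111222333"}]}
    ]
}

# Same comprehension as in get_app_ids_for_query
app_ids = [app["id"] for app in sample["bubbles"][0]["results"]]
print(app_ids)  # ['1580189385', '111222333']
```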
database.py:
import psycopg2
from .console import *
from .model import *
class ASODatabaseConnection:
    def __init__(self, creds: DatabaseCreds):
        self.databaseName = creds.database
        self.host = creds.host
        self.user = creds.user
        self.password = creds.password
        self.port = creds.port

    def __enter__(self):
        self.connection = psycopg2.connect(
            database=self.databaseName,
            host=self.host,
            user=self.user,
            password=self.password,
            port=self.port
        )
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.connection.commit()
        self.connection.close()

    def get_tasks_list(self):
        tasks = []
        cursor = self.connection.cursor()
        cursor.execute("SELECT * FROM tasks WHERE is_active = TRUE")
        taskRecords = []
        try:
            taskRecords = cursor.fetchall()
        except Exception:
            print_error('Not able to pick any tasks from database')
            return []
        for row in taskRecords:
            cursor.execute("SELECT value FROM words WHERE id = %s", (row[4],))
            task = None
            try:
                query = cursor.fetchone()
                task = Task(
                    id=row[0],
                    app=row[1],
                    country=row[2],
                    store=row[3],
                    query=query[0]
                )
            except Exception:
                print_error(f'Not able to pick a query by id={row[4]}')
                continue
            cursor.execute("SELECT name FROM apps WHERE id = %s", (task.app,))
            try:
                query = cursor.fetchone()
                task.appName = query[0]
            except Exception:
                print_error(f'Not able to pick the app={task.app}')
            cursor.execute("SELECT name FROM storefronts WHERE id = %s", (task.store,))
            try:
                query = cursor.fetchone()
                task.storeName = query[0]
            except Exception:
                print_error(f'Not able to pick the store={task.store}')
            cursor.execute("SELECT name FROM countries WHERE id = %s", (task.country,))
            try:
                query = cursor.fetchone()
                task.countryName = query[0]
            except Exception:
                print_error(f'Not able to pick the country={task.country}')
            tasks.append(task)
        return tasks

    def store_rank(self, rank: Rank):
        cursor = self.connection.cursor()
        cursor.execute('INSERT INTO ranks (task_id, time, rank) VALUES (%s, %s, %s)',
                       (rank.task, rank.time, rank.value))
The logic is pretty straightforward. A key point here is the use of __enter__ and __exit__ to open and close connections, committing transactions when working with the database using with.
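The same commit-on-exit pattern can be sketched with the standard library's sqlite3 standing in for psycopg2 (illustrative only; the real class connects to PostgreSQL):

```python
import os
import sqlite3
import tempfile

class DemoConnection:
    """Mirrors ASODatabaseConnection: connect on enter, commit and close on exit."""
    def __init__(self, path):
        self.path = path

    def __enter__(self):
        self.connection = sqlite3.connect(self.path)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.connection.commit()
        self.connection.close()

path = os.path.join(tempfile.mkdtemp(), "demo.db")
with DemoConnection(path) as db:
    db.connection.execute("CREATE TABLE ranks (rank INTEGER)")
    db.connection.execute("INSERT INTO ranks VALUES (3)")

# Reopening shows the insert was committed by __exit__
with DemoConnection(path) as db:
    rows = db.connection.execute("SELECT rank FROM ranks").fetchall()
print(rows)  # [(3,)]
```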
console.py:
from .model import *
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
def print_query(appName, storefront, country, query, rank):
    print(f'{BOLD}{appName:<16}{END}: {YELLOW}{str(query):<24}{END} {storefront:<6} ({country:<16}) -> {BLUE}{rank}{END}')

def print_error(message):
    print(f'{RED}{BOLD}ERROR:{END} {RED}{message}{END}')

def print_info(message):
    print(f'{BLUE}{BOLD}INFO:{END} {BLUE}{message}{END}')
model.py:
from dataclasses import dataclass
@dataclass
class DatabaseCreds:
    database: str
    host: str
    user: str
    password: str
    port: str

@dataclass
class Task:
    id: int
    app: int
    country: int
    store: int
    query: str
    appName: str = ""
    countryName: str = ""
    storeName: str = ""

@dataclass
class Rank:
    task: int
    time: str
    value: int
Now we can deploy our composition, manually add some words, apps, and tasks using PgAdmin (we'll automate this later), and check if the ranks table is being populated by our scraper script.
As we already have some data in our database, we can finally start building visualizations. For this purpose, we'll deploy the grafana/grafana-oss image. To avoid manual setup of the data source, let's automate it (since the configuration contains secrets, we'll generate it on the fly). First, add the following to deploy.sh:
source .env
# Prepare Grafana
echo -e "apiVersion: 1\n\ndeleteDatasources:\n  - name: aso-db\n    orgId: 1\n\ndatasources:\n  - name: aso-db\n    id: 1\n    uid: bdm1onlbl3f28f\n    orgId: 1\n    type: grafana-postgresql-datasource\n    typeName: PostgreSQL\n    typeLogoUrl: public/app/plugins/datasource/grafana-postgresql-datasource/img/postgresql_logo.svg\n    access: proxy\n    url: ${DB_HOST}:${DB_PORT}\n    user: ${DB_USER}\n    secureJsonData:\n      password: '${DB_PASSWORD}'\n    database: ${DB_NAME}\n    basicAuth: false\n    isDefault: true\n    jsonData:\n      connMaxLifetime: 14400\n      database: ${DB_NAME}\n      maxIdleConns: 100\n      maxIdleConnsAuto: true\n      maxOpenConns: 100\n      sslmode: disable\n      readOnly: false\n" > ./grafana/provisioning/datasources/automatic.yml
# Prepare PGAdmin
echo "{\"Servers\":{\"1\":{\"Name\":\"${DB_NAME}\",\"Group\":\"Servers\",\"Host\":\"${DB_HOST}\",\"Port\":${DB_PORT},\"MaintenanceDB\":\"${DB_NAME}\",\"Username\":\"${DB_USER}\",\"ConnectionParameters\":{\"sslmode\":\"prefer\",\"connect_timeout\":10,\"passfile\":\"/pgpass\"}}}}" > ./pgadmin/servers.json
echo "${DB_HOST}:${DB_PORT}:*:${DB_USER}:${DB_PASSWORD}" > ./pgadmin/pgpass
docker compose up -d
And add the service to compose.yaml:
aso_grafana:
  container_name: aso_grafana_container
  image: grafana/grafana-oss
  depends_on:
    aso_postgres:
      condition: service_healthy
  restart: unless-stopped
  ports:
    - 3000:3000
  networks:
    - aso_network
  volumes:
    - aso_grafana_data:/var/lib/grafana/data
    - ./grafana/provisioning:/etc/grafana/provisioning
    - ./grafana/dashboards:/var/lib/grafana/dashboards
Although we've set up automatic provisioning of dashboards, the easiest workflow is still to build a dashboard manually in Grafana, export it as a JSON file, and place it in ./grafana/dashboards. Also, don't forget to put the following main.yaml into grafana/provisioning/dashboards:
apiVersion: 1

providers:
  - name: "ASO Tool"
    orgId: 1
    type: file
    disableDeletion: false
    updateIntervalSeconds: 30
    allowUiUpdates: true
    options:
      path: /var/lib/grafana/dashboards
      foldersFromFilesStructure: true
Let's set up one dashboard panel as an example. In the query editor, enter the following code:
select
  ranks.time as timestamp,
  words.value as query,
  CASE WHEN ranks.rank < 0 THEN NULL ELSE ranks.rank END
from
  ranks
inner join tasks on ranks.task_id = tasks.id
inner join words on words.id = tasks.word_id
where tasks.store_id = 20 and tasks.is_active = true and tasks.country_id = 143441 and tasks.app_id = 1580189385;

The result might not look great initially because it includes all watched keywords. Let's refine it using Grafana's data transformer:

You can also use the dashboard's snippet:
example.json:
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 4,
"links": [],
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 9,
"panels": [],
"title": "US",
"type": "row"
},
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 18,
"w": 24,
"x": 0,
"y": 1
},
"id": 1,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"editorMode": "code",
"format": "table",
"rawQuery": true,
"rawSql": "select\n ranks.time as timestamp,\n words.value as query,\n CASE WHEN ranks.rank < 0 THEN NULL ELSE ranks.rank END\nfrom \n ranks\ninner join tasks on ranks.task_id = tasks.id\ninner join words on words.id = tasks.word_id\nwhere tasks.store_id = 20 and tasks.is_active = true and tasks.country_id = 143441 and tasks.app_id = 1580189385;",
"refId": "A",
"sql": {
"columns": [
{
"parameters": [],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
}
}
],
"title": "US iPad",
"transformations": [
{
"id": "partitionByValues",
"options": {
"fields": [
"query"
],
"keepFields": false
}
}
],
"type": "timeseries"
}
],
"schemaVersion": 39,
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-30d",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Tiamat",
"uid": "edm1pxqxe1b0gc",
"version": 19,
"weekStart": ""
}
Manually adding new tasks and words isn't very convenient, so let's add a simple tool to streamline this process. This tool will consist of backend and frontend modules. Since we've decided to use a Grafana plugin for the frontend, let's focus on the backend first.
We'll create another Python script that uses flask. Here's its deployment:
aso_control_back:
  container_name: aso_control_back_container
  image: python:3.12
  depends_on:
    aso_postgres:
      condition: service_healthy
  restart: unless-stopped
  ports:
    - 3200:3200
  networks:
    - aso_network
  environment:
    - PIP_ROOT_USER_ACTION=ignore
  volumes:
    - ./control:/control
  working_dir: /control
  entrypoint: >
    /bin/sh -c "
    pip3 install -r requirements.txt;
    python3 -u -m asocontrol --database=${DB_NAME} --host=${DB_HOST} --user=${DB_USER} --password=${DB_PASSWORD} --port=${DB_PORT};
    "
Let's organize the module's directories:
├── control
│ ├── asocontrol
│ │ ├── __main__.py
│ │ ├── console.py
│ │ └── database.py
│ └── requirements.txt
console.py is the same as in the scrapper, so we only need to reimplement database.py for the new tasks and create a flask-based server to handle requests.
database.py:
from dataclasses import dataclass
import psycopg2
from .console import *
@dataclass
class DatabaseCreds:
    database: str
    host: str
    user: str
    password: str
    port: str


class ASODatabaseConnection:
    def __init__(self, creds: DatabaseCreds):
        self.databaseName = creds.database
        self.host = creds.host
        self.user = creds.user
        self.password = creds.password
        self.port = creds.port

    def __enter__(self):
        self.connection = psycopg2.connect(
            database=self.databaseName,
            host=self.host,
            user=self.user,
            password=self.password,
            port=self.port
        )
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.connection.commit()
        self.connection.close()

    def create_tasks(self, apps, stores, countries, words):
        word_ids = self._get_word_ids(words)
        words_in_db = [word for _, word in word_ids]
        words_to_add = [word for word in words if word not in words_in_db]
        self._add_words(words_to_add)
        word_ids = self._get_word_ids(words)
        for app in apps:
            for store in stores:
                for country in countries:
                    for word in word_ids:
                        self._add_task(app, store, country, word[0])

    def delete_tasks(self, tasks):
        cursor = self.connection.cursor()
        cursor.execute("UPDATE tasks SET is_active=False WHERE id=ANY(%s)", (tasks,))

    def _get_word_ids(self, words):
        cursor = self.connection.cursor()
        cursor.execute("SELECT id, value FROM words WHERE value=ANY(%s)", (words,))
        word_ids = []
        try:
            word_ids = cursor.fetchall()
        except Exception:
            print_error('Not able to pick any words from database')
            return []
        return word_ids

    def _add_words(self, words):
        cursor = self.connection.cursor()
        insert_query = '''
            INSERT INTO words (value, is_archived)
            VALUES (%s, %s)
        '''
        for word in words:
            cursor.execute(insert_query, (word, True))

    def _add_task(self, app, store, country, word):
        cursor = self.connection.cursor()
        cursor.execute("SELECT id FROM tasks WHERE app_id=%s AND country_id=%s AND store_id=%s AND word_id=%s",
                       (app, country, store, word))
        # fetchall() returns tuples; unwrap them so ANY(%s) receives a plain integer array
        ids = []
        try:
            ids = [row[0] for row in cursor.fetchall()]
        except Exception:
            print_error('Not able to pick any tasks from database')
        if len(ids) > 0:
            cursor.execute("UPDATE tasks SET is_active=True WHERE id=ANY(%s)", (ids,))
        else:
            insert_query = '''
                INSERT INTO tasks (app_id, country_id, store_id, word_id, is_active)
                VALUES (%s, %s, %s, %s, %s)
            '''
            cursor.execute(insert_query, (app, country, store, word, True))
You can see that we don't remove tasks but simply deactivate them. Additionally, if a similar task already exists, we just reactivate it. The same approach applies to words. This reduces redundancy and allows for reuse of entities.
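The reactivate-or-insert flow can be exercised in isolation with sqlite3 (a sketch of the same logic; the real code uses psycopg2 with ANY(...) and an integer array):

```python
import sqlite3

def add_task(cur, app, store, country, word):
    # Reactivate a matching task if one exists, otherwise insert a new one
    cur.execute("SELECT id FROM tasks WHERE app_id=? AND country_id=? AND store_id=? AND word_id=?",
                (app, country, store, word))
    ids = [row[0] for row in cur.fetchall()]
    if ids:
        cur.executemany("UPDATE tasks SET is_active=1 WHERE id=?", [(i,) for i in ids])
    else:
        cur.execute("INSERT INTO tasks (app_id, country_id, store_id, word_id, is_active) VALUES (?, ?, ?, ?, 1)",
                    (app, country, store, word))

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, app_id, country_id, store_id, word_id, is_active)")
add_task(cur, 1, 20, 143441, 7)              # inserts a new task
cur.execute("UPDATE tasks SET is_active=0")  # 'deleting' just deactivates
add_task(cur, 1, 20, 143441, 7)              # reactivates instead of duplicating
count, active = cur.execute("SELECT COUNT(*), SUM(is_active) FROM tasks").fetchone()
print(count, active)  # 1 1
```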
__main__.py:
import click
from flask import Flask, jsonify, request
from flask_cors import CORS, cross_origin
from .database import *
app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'

dbCreds = None


@app.route('/')
@cross_origin(origin='*', headers=['Content-Type', 'Authorization'])
def home():
    return jsonify({
        "testlist": [
            "hello",
            "world"
        ]
    })


@app.route('/create_tasks', methods=['POST'])
@cross_origin(origin='*', headers=['Content-Type', 'Authorization'])
def create_tasks():
    apps = request.json['apps_list']
    stores = request.json['stores_list']
    countries = request.json['countries_list']
    words = request.json['words'].split(',')
    with ASODatabaseConnection(dbCreds) as base:
        base.create_tasks(apps, stores, countries, words)
    return "Created task!"


@app.route('/remove_tasks', methods=['POST'])
@cross_origin(origin='*', headers=['Content-Type', 'Authorization'])
def remove_tasks():
    with ASODatabaseConnection(dbCreds) as base:
        base.delete_tasks(request.get_json())
    return "Removed selected tasks!"


@click.command()
@click.option('--database', type=str, help='A name of your database')
@click.option('--host', type=str, help='A host name where the database is deployed')
@click.option('--user', type=str, help='A user name for the database')
@click.option('--password', type=str, help='A password of the user')
@click.option('--port', type=int, help='A port on the host')
def start_control(database, host, user, password, port):
    global dbCreds
    dbCreds = DatabaseCreds(
        database=database,
        host=host,
        user=user,
        password=password,
        port=port
    )
    app.run(debug=True, host='aso_control_back_container', port=3200)


if __name__ == '__main__':
    start_control()
As mentioned before, we'll use the volkovlabs-form-panel plugin to implement the task scheduling form.
To install it, simply add the following environment variable to compose.yaml:
# aso_grafana:
environment:
  - GF_INSTALL_PLUGINS=volkovlabs-form-panel
In Grafana, add a panel of type Data manipulation. Create several queries to retrieve values for each form element. In our case, we need the id and name from countries, storefronts, and apps. In the UI, you can select table_name:id for the Value Field and table_name:name for the Label Field for the corresponding form elements.
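For example, the query feeding the Apps element might look like this (the app_id/app_name aliases are an assumption chosen to match the form configuration; the Stores and Countries elements follow the same pattern):

```sql
SELECT id AS app_id, name AS app_name
FROM apps
WHERE is_archived = FALSE;
```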
The results of user selections are stored in the plugin's variable elements with the IDs you assigned to your form elements. If you prefer not to send all values to the backend, you can customize the payload in Update Request Payload. The request itself is configured in Update Request.

Use the dashboard's snippet:
control.json:
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 0,
"id": 1,
"links": [],
"panels": [
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"gridPos": {
"h": 14,
"w": 24,
"x": 0,
"y": 0
},
"id": 1,
"options": {
"buttonGroup": {
"orientation": "center",
"size": "md"
},
"confirmModal": {
"body": "Please confirm to update changed values",
"cancel": "Cancel",
"columns": {
"include": [
"name",
"oldValue",
"newValue"
],
"name": "Label",
"newValue": "New Value",
"oldValue": "Old Value"
},
"confirm": "Confirm",
"title": "Confirm update request"
},
"elementValueChanged": "",
"elements": [
{
"id": "apps_list",
"labelWidth": 10,
"options": [],
"optionsSource": "Query",
"queryOptions": {
"label": "app_name",
"source": "APPS",
"value": "app_id"
},
"section": "target",
"title": "Apps",
"tooltip": "",
"type": "multiselect",
"uid": "d525baba-e230-4511-8f17-af9f2994e21b",
"unit": "",
"value": ""
},
{
"id": "stores_list",
"labelWidth": 10,
"options": [],
"optionsSource": "Query",
"queryOptions": {
"label": "store_name",
"source": "STORES",
"value": "store_id"
},
"section": "target",
"title": "Stores",
"tooltip": "",
"type": "multiselect",
"uid": "921c23b2-4cce-4a4a-9329-cad2f6c9a1f3",
"unit": "",
"value": ""
},
{
"id": "countries_list",
"labelWidth": 10,
"options": [],
"optionsSource": "Query",
"queryOptions": {
"label": "country_name",
"source": "COUNTRIES",
"value": "country_id"
},
"section": "target",
"title": "Countries",
"tooltip": "",
"type": "multiselect",
"uid": "36717552-bf8d-4049-a86d-c71695f2b6ae",
"unit": "",
"value": ""
},
{
"id": "words",
"labelWidth": 10,
"rows": 10,
"section": "words",
"title": "Words",
"tooltip": "",
"type": "textarea",
"uid": "60975405-6e56-4202-bc50-2557c43adcfb",
"unit": "",
"value": ""
}
],
"initial": {
"code": "elements.find(x => x.id === \"apps_list\").value = [];\nelements.find(x => x.id === \"stores_list\").value = [];\nelements.find(x => x.id === \"countries_list\").value = [];\nelements.find(x => x.id === \"words\").value = \"\";",
"contentType": "application/json",
"datasource": "ControlAPI",
"getPayload": "return {\n rawSql: '',\n format: 'table',\n}",
"highlight": false,
"highlightColor": "red",
"method": "-",
"url": "http://localhost:3200/test"
},
"layout": {
"orientation": "vertical",
"padding": 10,
"sections": [
{
"id": "target",
"name": ""
},
{
"id": "words",
"name": ""
}
],
"variant": "split"
},
"reset": {
"backgroundColor": "purple",
"foregroundColor": "yellow",
"icon": "process",
"text": "Reset",
"variant": "destructive"
},
"resetAction": {
"code": "",
"confirm": true,
"getPayload": "return {\n rawSql: '',\n format: 'table',\n}",
"mode": "initial"
},
"saveDefault": {
"icon": "save",
"text": "Save Default",
"variant": "hidden"
},
"submit": {
"backgroundColor": "semi-dark-blue",
"foregroundColor": "text",
"icon": "panel-add",
"text": "Schedule",
"variant": "custom"
},
"sync": true,
"update": {
"code": "if (response && response.ok) {\n notifySuccess(['Update', 'Values updated successfully.']);\n locationService.reload();\n} else {\n notifyError(['Update', 'An error occurred updating values.']);\n}",
"confirm": false,
"contentType": "application/json",
"datasource": "ControlAPI",
"getPayload": "const payload = {};\n\nelements.forEach((element) => {\n if (!element.value) {\n return;\n }\n\n payload[element.id] = element.value;\n})\n\nreturn payload;\n\n/**\n * Data Source payload\n */ \nreturn {\n rawSql: '',\n format: 'table',\n};",
"header": [],
"method": "POST",
"payloadMode": "all",
"url": "http://localhost:3200/create_tasks"
},
"updateEnabled": "auto"
},
"pluginVersion": "3.7.0",
"targets": [
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"editorMode": "code",
"format": "table",
"rawQuery": true,
"rawSql": "SELECT \n id as app_id,\n name as app_name\nFROM \n apps\nWHERE\n is_archived=false;",
"refId": "APPS",
"sql": {
"columns": [
{
"parameters": [
{
"name": "*",
"type": "functionParameter"
}
],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
},
"table": "apps"
},
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"editorMode": "code",
"format": "table",
"hide": false,
"rawQuery": true,
"rawSql": "SELECT \n name as store_name,\n id as store_id\nFROM \n storefronts \nWHERE\n id=20 or id=21;",
"refId": "STORES",
"sql": {
"columns": [
{
"parameters": [],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
}
},
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"editorMode": "code",
"format": "table",
"hide": false,
"rawQuery": true,
"rawSql": "SELECT \n id as country_id,\n name as country_name,\n code as country_code\nFROM \n countries;",
"refId": "COUNTRIES",
"sql": {
"columns": [
{
"parameters": [],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
}
}
],
"title": "Creating tasks",
"type": "volkovlabs-form-panel"
},
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"gridPos": {
"h": 12,
"w": 8,
"x": 0,
"y": 14
},
"id": 2,
"options": {
"buttonGroup": {
"orientation": "center",
"size": "md"
},
"confirmModal": {
"body": "Please confirm to update changed values",
"cancel": "Cancel",
"columns": {
"include": [
"name",
"oldValue",
"newValue"
],
"name": "Label",
"newValue": "New Value",
"oldValue": "Old Value"
},
"confirm": "Confirm",
"title": "Confirm update request"
},
"elementValueChanged": "",
"elements": [
{
"id": "tasks_list",
"labelWidth": 10,
"options": [],
"optionsSource": "Query",
"queryOptions": {
"label": "?column?",
"source": "TASKS",
"value": "id"
},
"section": "tasks",
"showIf": "",
"title": "Tasks",
"tooltip": "",
"type": "multiselect",
"uid": "ea1daf9b-8f21-4047-9b9b-c0fd43207e00",
"unit": "",
"value": "",
"width": 75
}
],
"initial": {
"code": "elements.find(x => x.id === \"tasks_list\").value = [];",
"contentType": "application/json",
"getPayload": "return {\n rawSql: '',\n format: 'table',\n}",
"highlight": false,
"highlightColor": "red",
"method": "-"
},
"layout": {
"orientation": "vertical",
"padding": 10,
"sections": [],
"variant": "single"
},
"reset": {
"backgroundColor": "purple",
"foregroundColor": "yellow",
"icon": "process",
"text": "Reset",
"variant": "hidden"
},
"resetAction": {
"code": "if (response && response.ok) {\n notifySuccess(['Update', 'Values updated successfully.']);\n locationService.reload();\n} else {\n notifyError(['Update', 'An error occurred updating values.']);\n}",
"confirm": false,
"getPayload": "return {\n rawSql: '',\n format: 'table',\n}",
"mode": "initial"
},
"saveDefault": {
"icon": "save",
"text": "Save Default",
"variant": "hidden"
},
"submit": {
"backgroundColor": "purple",
"foregroundColor": "yellow",
"icon": "trash-alt",
"text": "Deactivate",
"variant": "destructive"
},
"sync": true,
"update": {
"code": "if (response && response.ok) {\n notifySuccess(['Update', 'Values updated successfully.']);\n locationService.reload();\n} else {\n notifyError(['Update', 'An error occurred updating values.']);\n}",
"confirm": false,
"contentType": "application/json",
"getPayload": "return elements.find(x => x.id === \"tasks_list\").value;",
"method": "POST",
"payloadMode": "custom",
"url": "http://localhost:3200/remove_tasks"
},
"updateEnabled": "auto"
},
"pluginVersion": "3.7.0",
"targets": [
{
"datasource": {
"type": "grafana-postgresql-datasource",
"uid": "bdm1onlbl3f28f"
},
"editorMode": "code",
"format": "table",
"rawQuery": true,
"rawSql": "select\n\ttasks.id,\n\tapps.id as app_id,\n\tstorefronts.id as store_id,\n\tcountries.id as country_id,\n\t'['||apps.name||']: <'||storefronts.name||'> ('||countries.name||') '||words.value\nfrom \n\ttasks\ninner join apps on apps.id = tasks.app_id\ninner join words on words.id = tasks.word_id\ninner join storefronts on storefronts.id = tasks.store_id\ninner join countries on countries.id = tasks.country_id\nwhere tasks.is_active = true;",
"refId": "TASKS",
"sql": {
"columns": [
{
"parameters": [],
"type": "function"
}
],
"groupBy": [
{
"property": {
"type": "string"
},
"type": "groupBy"
}
],
"limit": 50
}
}
],
"title": "Deactivate tasks",
"type": "volkovlabs-form-panel"
}
],
"schemaVersion": 39,
"tags": [],
"templating": {
"list": []
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Task management",
"uid": "edm24872sj474a",
"version": 12,
"weekStart": ""
}
At this point, we have a typical ASO tool that can monitor selected keywords. However, we want deeper insights from the data we gather. For example, it would be helpful to see which keywords drive downloads, conversion, or retention, so we can tell whether our app is actually reaching its target audience.
We could build our own analytics tool, or we could use an existing free one. Here, I'll show you how to connect Amplitude. For this, we need another plugin, yesoreyeram-infinity-datasource, which allows us to use a REST API as a data source.
For its configuration, we only need to set up authorization. Use Amplitude's API Key as the login and Secret Key as the password (both of which can be found in Amplitude under Settings -> Organization Settings -> Projects -> <Your Project> -> General):

Next, create another panel and use this new data source. We'll pull in daily new-user statistics. For this, use the users request with the following parameters (full documentation on the Amplitude API can be found here, and on Grafana's variables here):
- start - beginning of the time range (${__from:date:YYYYMMDD})
- end - end of the time range (${__to:date:YYYYMMDD})
- m - new for new users
- s - segment definition ([{"prop":"country","op":"is","values":["United States"]}])

As a result, we'll get something like this:
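You can also query the same endpoint outside Grafana, for example to cross-check the panel. A minimal sketch with Python: the endpoint and parameters follow the Amplitude users request described above, the helper name is my own, and you would substitute your real API and secret keys before issuing the request.

```python
import json

# import requests  # third-party; uncomment to actually issue the request

AMPLITUDE_USERS_URL = "https://amplitude.com/api/2/users"


def build_users_params(start: str, end: str, country: str) -> dict:
    """Build query parameters for Amplitude's users request:
    daily new users, segmented by country."""
    segment = [{"prop": "country", "op": "is", "values": [country]}]
    return {
        "start": start,  # YYYYMMDD, what ${__from:date:YYYYMMDD} expands to
        "end": end,      # YYYYMMDD, what ${__to:date:YYYYMMDD} expands to
        "m": "new",      # metric: new users
        "s": json.dumps(segment, separators=(",", ":")),
    }


params = build_users_params("20240101", "20240131", "United States")
# resp = requests.get(AMPLITUDE_USERS_URL, params=params,
#                     auth=("YOUR_API_KEY", "YOUR_SECRET_KEY"))
```

The same API Key / Secret Key pair used for the Infinity data source goes into HTTP basic auth here.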

We've built an ASO tool with a bit of code, making it extendable and entirely free (except for your electricity bills). It can run locally on your machine, which is perfectly fine for indie developers or small teams.
This tool allows you to gather data about the search ranks of keywords in the App Store for any app you want to monitor. You can customize your data processing, visualization, and alerting in Grafana. Additionally, you can integrate custom data sources and analytics to calculate correlations and gain deeper insights.
Of course, this is just a basic tool with plenty of room for improvements:
Thank you for reading. I hope this article helps you. If you have any feedback or corrections, please share.
George Ostrobrod, 2024