Compare commits: 4e091002a7 ... main (24 commits)

Commits in range: 801b6ea49d, 21652acb03, d6c4db5e06, 8892299c8e, 8f69a24d67, 4e2825ffe2, 6b6d7af07d, d3c1cc4cc7, 855e71eeaa, bc5f99e768, fbf65c0ac0, 052826e3a1, be0ec13d3e, 63a730a940, 17bb921941, 21a9ac6c86, 1f3505f132, b8d38f52a2, 294a84a414, 0bda151abe, 0877db4c36, aee56753d5, e758749bd6, 7a253e0f0c
BUILD.md (15 changed lines)

@@ -12,29 +12,29 @@
 cloudron build \
   --set-build-service builder.docker.due.ren \
   --build-service-token e3265de06b1d0e7bb38400539012a8433a74c2c96a17955e \
-   --set-repository andreasdueren/ente-cloudron \
-   --tag 0.1.0
+   --set-repository andreasdueren/affine-cloudron \
+   --tag 0.25.5-6
 ```

 ## Deployment Steps
 1. Remove any previous dev install of AFFiNE on the Cloudron (always reinstall from scratch).
 2. Install the freshly built image:
 ```bash
- cloudron install --location ente.due.ren --image andreasdueren/ente-cloudron:0.1.0
+ cloudron install --location affine.due.ren --image andreasdueren/affine-cloudron:0.25.5-6
 ```
 3. When prompted, confirm the app info and wait for Cloudron to report success (abort after ~30 seconds if installation stalls or errors to avoid hanging sessions).
- 4. Visit `https://ente.due.ren` (or the chosen location) and sign in using Cloudron SSO.
+ 4. Visit `https://affine.due.ren` (or the chosen location) and sign in using Cloudron SSO.

 ## Testing Checklist
 - Open the app dashboard and ensure the landing page loads without 502/504 errors.
 - Create a workspace, add a note, upload an asset, and reload to confirm `/app/data` persistence.
- - Trigger OIDC login/logout flows to verify Cloudron SSO works (callback `/api/v1/session/callback`).
+ - Trigger OIDC login/logout flows to verify Cloudron SSO works (callback `/oauth/callback`).
 - Send an invitation email to validate SMTP credentials wired from the Cloudron mail addon.
- - Inspect logs with `cloudron logs --app ente.due.ren -f` for migration output from `scripts/self-host-predeploy.js`.
+ - Inspect logs with `cloudron logs --app affine.due.ren -f` for migration output from `scripts/self-host-predeploy.js`.

 ## Troubleshooting
 - **Migrations hang**: restart the app; migrations rerun automatically before the server starts. Check PostgreSQL reachability via `cloudron exec --app <id> -- env | grep DATABASE_URL`.
- - **Login issues**: confirm the Cloudron OIDC client is enabled for the app and that the callback URL `/api/v1/session/callback` is allowed.
+ - **Login issues**: confirm the Cloudron OIDC client is enabled for the app and that the callback URL `/oauth/callback` is allowed.
 - **Email failures**: verify the Cloudron sendmail addon is provisioned and that `MAILER_*` env vars show up inside the container (`cloudron exec --app <id> -- env | grep MAILER`).
 - **Large uploads rejected**: adjust `client_max_body_size` in `nginx.conf` if you routinely exceed 200 MB assets, then rebuild.

@@ -42,3 +42,4 @@ cloudron build \
 - Persistent config lives in `/app/data/config/config.json`. Modify values (e.g., Stripe, throttling) and restart the app; the file is backed up by Cloudron.
 - Uploaded files live in `/app/data/storage` and map to `~/.affine/storage` inside the runtime.
 - Default health check hits `/api/healthz`; customize via `CloudronManifest.json` if upstream changes.
+ - Copilot API keys live in `/app/data/config/config.json`. The default file contains placeholders (`sk-provide-openai-key-here`, etc.); open it in the Cloudron File Manager or via `cloudron exec` to paste your own values, then restart the app so AFFiNE picks up the changes.

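A quick smoke test along the lines of the checklist above (a sketch, not part of BUILD.md itself; the `affine.due.ren` location, `/api/healthz` path, and `cloudron logs`/`cloudron exec` commands are the ones this document already references):

```bash
# Health check: the manifest's default health endpoint should answer.
curl -fsS https://affine.due.ren/api/healthz

# Follow migration output from scripts/self-host-predeploy.js during first start.
cloudron logs --app affine.due.ren -f

# Confirm the mail addon wired MAILER_* variables into the container.
cloudron exec --app affine.due.ren -- env | grep MAILER
```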
CloudronManifest.json

@@ -5,21 +5,24 @@
   "description": "Next-gen knowledge base that blends docs, whiteboards, and databases for self-hosted teams.",
   "website": "https://affine.pro",
   "contactEmail": "support@affine.pro",
-   "version": "0.1.6",
-   "changelog": "Initial Cloudron packaging",
+   "version": "0.25.5-6",
+   "upstreamVersion": "0.25.5",
+   "changelog": "Configure copilot via config.json placeholders; env overrides removed",
   "icon": "file://icon.png",
   "manifestVersion": 2,
   "minBoxVersion": "7.0.0",
   "httpPort": 3000,
   "configurePath": "/admin",
   "optionalSso": true,
   "addons": {
     "localstorage": {},
     "postgresql": {},
     "redis": {},
     "sendmail": {},
     "oidc": {
       "redirectUris": [
         "/api/v1/session/callback"
       ],
-       "loginRedirectUri": "/api/v1/session/callback"
+       "loginRedirectUri": "/oauth/callback",
+       "logoutRedirectUri": "/",
+       "tokenSignatureAlgorithm": "RS256"
     }
   },
   "memoryLimit": 2147483648,

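To confirm the `oidc` addon actually provisioned a client for the app, the Cloudron-injected variables can be inspected from inside the container (a sketch; the `CLOUDRON_OIDC_*` names are the ones consumed by `start.sh` further down, and the config path is the persistent copy described in BUILD.md):

```bash
# The oidc addon should inject client credentials and an issuer/discovery URL.
cloudron exec --app affine.due.ren -- env | grep CLOUDRON_OIDC_

# start.sh copies these into the persistent config under auth.providers.oidc.
cloudron exec --app affine.due.ren -- cat /app/data/config/config.json
```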
Dockerfile (25 changed lines)

@@ -1,4 +1,5 @@
- FROM ghcr.io/toeverything/affine:stable AS upstream
+ ARG AFFINE_VERSION=0.25.5
+ FROM ghcr.io/toeverything/affine:${AFFINE_VERSION} AS upstream

 FROM cloudron/base:5.0.0

@@ -13,9 +14,21 @@ ENV APP_CODE_DIR=/app/code \

 RUN mkdir -p "$APP_CODE_DIR" "$APP_DATA_DIR" "$APP_RUNTIME_DIR" "$APP_TMP_DIR" && \
     apt-get update && \
-     apt-get install -y --no-install-recommends jq python3 ca-certificates curl openssl libjemalloc2 && \
+     apt-get install -y --no-install-recommends jq python3 ca-certificates curl openssl libjemalloc2 postgresql-client mysql-client && \
     rm -rf /var/lib/apt/lists/*

+ RUN curl -fsSL https://repo.manticoresearch.com/GPG-KEY-manticore > /tmp/manticore.key && \
+     curl -fsSL https://repo.manticoresearch.com/GPG-KEY-SHA256-manticore >> /tmp/manticore.key && \
+     gpg --dearmor -o /usr/share/keyrings/manticore.gpg /tmp/manticore.key && \
+     rm /tmp/manticore.key && \
+     echo "deb [signed-by=/usr/share/keyrings/manticore.gpg] https://repo.manticoresearch.com/repository/manticoresearch_jammy/ jammy main" > /etc/apt/sources.list.d/manticore.list && \
+     apt-get update && \
+     apt-get install -y --no-install-recommends manticore manticore-extra && \
+     rm -rf /var/lib/apt/lists/*
+
+ RUN ln -sf /usr/share/manticore/modules/manticore-buddy/bin/manticore-buddy /usr/bin/manticore-buddy
+ RUN chown -R cloudron:cloudron /usr/share/manticore
+
 # bring in the upstream runtime and packaged server artifacts
 COPY --from=upstream /usr/local /usr/local
 COPY --from=upstream /opt /opt
@@ -24,13 +37,17 @@ COPY --from=upstream /app "$APP_BUILD_DIR"
 # configuration, launch scripts, and defaults
 COPY start.sh "$APP_CODE_DIR/start.sh"
 COPY run-affine.sh "$APP_CODE_DIR/run-affine.sh"
+ COPY run-manticore.sh "$APP_CODE_DIR/run-manticore.sh"
+ COPY run-buddy.sh "$APP_CODE_DIR/run-buddy.sh"
 COPY nginx.conf "$APP_CODE_DIR/nginx.conf"
 COPY supervisord.conf "$APP_CODE_DIR/supervisord.conf"
 COPY config.example.json "$APP_CODE_DIR/config.example.json"
 COPY tmp_data/ "$APP_TMP_DIR/"
+ COPY manticore/ "$APP_CODE_DIR/manticore/"

- RUN chmod +x "$APP_CODE_DIR/start.sh" "$APP_CODE_DIR/run-affine.sh" && \
-     chown -R cloudron:cloudron "$APP_CODE_DIR" "$APP_DATA_DIR" "$APP_RUNTIME_DIR" "$APP_TMP_DIR"
+ RUN chmod +x "$APP_CODE_DIR/start.sh" "$APP_CODE_DIR/run-affine.sh" "$APP_CODE_DIR/run-manticore.sh" "$APP_CODE_DIR/run-buddy.sh" && \
+     chown cloudron:cloudron "$APP_CODE_DIR/start.sh" "$APP_CODE_DIR/run-affine.sh" "$APP_CODE_DIR/run-manticore.sh" "$APP_CODE_DIR/run-buddy.sh" && \
+     chown -R cloudron:cloudron "$APP_DATA_DIR" "$APP_RUNTIME_DIR" "$APP_TMP_DIR"

 EXPOSE 3000
 CMD ["/app/code/start.sh"]

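As a quick sanity check on the resulting image, the Manticore bits installed above can be verified before handing the tag to `cloudron install` (a sketch; it assumes Docker is available locally and that the `andreasdueren/affine-cloudron:0.25.5-6` tag from BUILD.md has been pushed):

```bash
# Both the search daemon and the buddy symlink created above should exist in the image.
docker run --rm --entrypoint /bin/ls andreasdueren/affine-cloudron:0.25.5-6 -l /usr/bin/searchd /usr/bin/manticore-buddy
```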
config.example.json

@@ -2,5 +2,38 @@
   "$schema": "https://github.com/toeverything/AFFiNE/releases/latest/download/config.schema.json",
   "server": {
     "name": "AFFiNE Self Hosted Server"
   },
+   "copilot": {
+     "enabled": true,
+     "scenarios": {
+       "override_enabled": true,
+       "scenarios": {
+         "audio_transcribing": "gemini-2.5-flash",
+         "chat": "gemini-2.5-flash",
+         "embedding": "gemini-embedding-001",
+         "image": "gpt-image-1",
+         "rerank": "gpt-4.1",
+         "coding": "claude-sonnet-4-5@20250929",
+         "complex_text_generation": "gpt-4o-2024-08-06",
+         "quick_decision_making": "gpt-5-mini",
+         "quick_text_generation": "gemini-2.5-flash",
+         "polish_and_summarize": "gemini-2.5-flash"
+       }
+     },
+     "providers.openai": {
+       "apiKey": "sk-provide-openai-key-here",
+       "baseURL": "https://api.openai.com/v1"
+     },
+     "providers.anthropic": {
+       "apiKey": "sk-ant-provide-anthropic-key-here",
+       "baseURL": "https://api.anthropic.com/v1"
+     },
+     "providers.gemini": {
+       "apiKey": "provide-gemini-key-here",
+       "baseURL": "https://generativelanguage.googleapis.com/v1beta"
+     },
+     "exa": {
+       "key": "provide-exa-key-or-leave-empty"
+     }
+   }
 }

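Since the packaged defaults only ship placeholder keys, real values have to go into the persistent copy under `/app/data/config/config.json` (a sketch of one way to do it; `YOUR-REAL-OPENAI-KEY` is obviously a placeholder, and `cloudron restart` is assumed to be available in your Cloudron CLI):

```bash
# Replace a placeholder key in the persistent config copy, then restart so AFFiNE re-reads it.
cloudron exec --app affine.due.ren -- sed -i 's/sk-provide-openai-key-here/YOUR-REAL-OPENAI-KEY/' /app/data/config/config.json
cloudron restart --app affine.due.ren
```

The Cloudron File Manager works just as well for this, as the BUILD.md note above points out.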
manticore/block.sql (new file, 23 lines)

@@ -0,0 +1,23 @@
CREATE TABLE IF NOT EXISTS block (
    workspace_id string attribute,
    doc_id string attribute,
    block_id string attribute,
    content text,
    flavour string attribute,
    flavour_indexed string attribute indexed,
    blob string attribute indexed,
    ref_doc_id string attribute indexed,
    ref string stored,
    parent_flavour string attribute,
    parent_flavour_indexed string attribute indexed,
    parent_block_id string attribute,
    parent_block_id_indexed string attribute indexed,
    additional string stored,
    markdown_preview string stored,
    created_by_user_id string attribute,
    updated_by_user_id string attribute,
    created_at timestamp,
    updated_at timestamp
)
charset_table = 'non_cjk, cjk'
index_field_lengths = '1';

manticore/doc.sql (new file, 13 lines)

@@ -0,0 +1,13 @@
CREATE TABLE IF NOT EXISTS doc (
    workspace_id string attribute,
    doc_id string attribute,
    title text,
    summary string stored,
    journal string stored,
    created_by_user_id string attribute,
    updated_by_user_id string attribute,
    created_at timestamp,
    updated_at timestamp
)
charset_table = 'non_cjk, cjk'
index_field_lengths = '1';

manticore/manticore.conf (new file, 23 lines)

@@ -0,0 +1,23 @@
#
# Manticore Search configuration for AFFiNE on Cloudron.
# Keeps all runtime state under /app/data so backups include the indexer data.
#

common {
    plugin_dir = /app/data/manticore/plugins
}

searchd {
    listen = 127.0.0.1:9306:mysql41
    listen = 127.0.0.1:9308:http
    listen = 127.0.0.1:9312
    listen = /run/manticore/manticore.sock:mysql41

    log = /app/data/manticore/logs/searchd.log
    query_log = /app/data/manticore/logs/query.log
    binlog_path = /app/data/manticore/binlog
    data_dir = /app/data/manticore/data
    pid_file = /run/manticore/searchd.pid
    mysql_version_string = 8.0.33
    buddy_path = /app/code/run-buddy.sh --disable-telemetry
}

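Because searchd only listens on loopback, the simplest way to check that the indexer came up and that the `doc`/`block` tables above were seeded is the MySQL-protocol port from inside the container (a sketch; the `mysql` client is the one the Dockerfile installs, and 9306 is the listener declared in this config):

```bash
# searchd speaks the MySQL protocol on 127.0.0.1:9306.
cloudron exec --app affine.due.ren -- mysql -h 127.0.0.1 -P 9306 -e "SHOW TABLES;"

# Row counts grow once AFFiNE starts indexing documents.
cloudron exec --app affine.due.ren -- mysql -h 127.0.0.1 -P 9306 -e "SELECT count(*) FROM doc;"
```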
nginx.conf

@@ -17,6 +17,8 @@ http {
     client_body_temp_path /run/nginx/body;
     proxy_temp_path /run/nginx/proxy;
     fastcgi_temp_path /run/nginx/fastcgi;
+     uwsgi_temp_path /run/nginx/uwsgi;
+     scgi_temp_path /run/nginx/scgi;

     log_format cloudron '$remote_addr - $remote_user [$time_local] "$request" '
                         '$status $body_bytes_sent "$http_referer" '
@@ -36,7 +38,7 @@ http {
     server {
         listen 3000;
         server_name _;
-         client_max_body_size 200m;
+         client_max_body_size 600m;

         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;

run-affine.sh (198 changed lines)

@@ -89,7 +89,9 @@ PY
 else
     export AFFINE_SERVER_HTTPS=false
 fi
- export AFFINE_INDEXER_ENABLED="${AFFINE_INDEXER_ENABLED:-false}"
+ export AFFINE_INDEXER_ENABLED="${AFFINE_INDEXER_ENABLED:-true}"
+ export AFFINE_INDEXER_SEARCH_PROVIDER="${AFFINE_INDEXER_SEARCH_PROVIDER:-manticoresearch}"
+ export AFFINE_INDEXER_SEARCH_ENDPOINT="${AFFINE_INDEXER_SEARCH_ENDPOINT:-http://127.0.0.1:9308}"
 }

 ensure_runtime_envs() {
@@ -99,9 +101,203 @@ ensure_runtime_envs() {
     ensure_server_env
 }

+ # Helper to parse indexer endpoint into host/port for readiness checks
+ wait_for_indexer() {
+     if [ "${AFFINE_INDEXER_ENABLED:-false}" != "true" ]; then
+         return
+     fi
+     local endpoint="${AFFINE_INDEXER_SEARCH_ENDPOINT:-}"
+     if [ -z "$endpoint" ]; then
+         return
+     fi
+     log "Waiting for indexer endpoint ${endpoint}"
+     if python3 - "$endpoint" <<'PY'; then
+ import socket
+ import sys
+ import time
+ from urllib.parse import urlparse
+
+ endpoint = sys.argv[1]
+ if not endpoint.startswith(('http://', 'https://')):
+     endpoint = 'http://' + endpoint
+ parsed = urlparse(endpoint)
+ host = parsed.hostname
+ port = parsed.port or (443 if parsed.scheme == 'https' else 80)
+ if not host or not port:
+     sys.exit(1)
+
+ for _ in range(60):
+     try:
+         with socket.create_connection((host, port), timeout=2):
+             sys.exit(0)
+     except OSError:
+         time.sleep(1)
+ sys.exit(1)
+ PY
+         log "Indexer is ready"
+     else
+         log "Indexer at ${endpoint} not reachable after waiting, continuing startup"
+     fi
+ }
+
+ seed_manticore_tables() {
+     local sql_dir="${APP_CODE_DIR:-/app/code}/manticore"
+     if [ ! -d "$sql_dir" ]; then
+         log "Manticore SQL directory ${sql_dir} missing; skipping table seed"
+         return
+     fi
+     if ! command -v mysql >/dev/null 2>&1; then
+         log "mysql client not found; cannot seed Manticore tables"
+         return
+     fi
+     local mysql_cmd=(mysql -h 127.0.0.1 -P 9306)
+     for table in doc block; do
+         local sql_file="$sql_dir/${table}.sql"
+         if [ ! -f "$sql_file" ]; then
+             continue
+         fi
+         if "${mysql_cmd[@]}" < "$sql_file" >/dev/null 2>&1; then
+             log "Ensured Manticore table ${table}"
+         else
+             log "WARNING: Failed to apply ${sql_file} to Manticore"
+         fi
+     done
+ }
+
+ patch_upload_limits() {
+     local target="$APP_DIR/dist/main.js"
+     if [ ! -f "$target" ]; then
+         return
+     fi
+     python3 - "$target" <<'PY'
+ import sys
+ from pathlib import Path
+
+ target = Path(sys.argv[1])
+ data = target.read_text()
+ updated = data
+ updated = updated.replace("limit: 100 * OneMB", "limit: 512 * OneMB", 1)
+ updated = updated.replace("maxFileSize: 100 * OneMB", "maxFileSize: 512 * OneMB", 1)
+ if updated != data:
+     target.write_text(updated)
+ PY
+ }
+
+ grant_team_plan_features() {
+     log "Ensuring self-hosted workspaces have team plan features"
+     node <<'NODE'
+ const { PrismaClient } = require('@prisma/client');
+
+ const prisma = new PrismaClient();
+
+ async function main() {
+   const feature = await prisma.feature.findFirst({
+     where: { name: 'team_plan_v1' },
+     orderBy: { deprecatedVersion: 'desc' },
+   });
+   if (!feature) {
+     console.warn('[team-plan] Feature record not found, skipping');
+     return;
+   }
+
+   const workspaces = await prisma.workspace.findMany({
+     select: { id: true },
+   });
+
+   for (const { id } of workspaces) {
+     const existing = await prisma.workspaceFeature.findFirst({
+       where: {
+         workspaceId: id,
+         name: 'team_plan_v1',
+         activated: true,
+       },
+     });
+     if (existing) continue;
+
+     await prisma.workspaceFeature.create({
+       data: {
+         workspaceId: id,
+         featureId: feature.id,
+         name: 'team_plan_v1',
+         type: feature.deprecatedType ?? 1,
+         configs: feature.configs,
+         reason: 'selfhost-default',
+         activated: true,
+       },
+     });
+     console.log(`[team-plan] Granted team plan to workspace ${id}`);
+   }
+
+   await prisma.$executeRawUnsafe(`
+     CREATE OR REPLACE FUNCTION grant_team_plan_feature()
+     RETURNS TRIGGER AS $$
+     DECLARE
+       feature_id integer;
+       feature_type integer;
+       feature_configs jsonb;
+     BEGIN
+       SELECT id, type, configs
+         INTO feature_id, feature_type, feature_configs
+       FROM features
+       WHERE feature = 'team_plan_v1'
+       ORDER BY version DESC
+       LIMIT 1;
+
+       IF feature_id IS NULL THEN
+         RETURN NEW;
+       END IF;
+
+       INSERT INTO workspace_features
+         (workspace_id, feature_id, name, type, configs, reason, activated, created_at)
+       SELECT
+         NEW.id,
+         feature_id,
+         'team_plan_v1',
+         feature_type,
+         feature_configs,
+         'selfhost-default',
+         TRUE,
+         NOW()
+       WHERE NOT EXISTS (
+         SELECT 1 FROM workspace_features
+         WHERE workspace_id = NEW.id AND name = 'team_plan_v1' AND activated = TRUE
+       );
+
+       RETURN NEW;
+     END;
+     $$ LANGUAGE plpgsql;
+   `);
+
+   await prisma.$executeRawUnsafe(`
+     DO $$ BEGIN
+       CREATE TRIGGER grant_team_plan_feature_trigger
+       AFTER INSERT ON workspaces
+       FOR EACH ROW
+       EXECUTE FUNCTION grant_team_plan_feature();
+     EXCEPTION
+       WHEN duplicate_object THEN NULL;
+     END $$;
+   `);
+ }
+
+ main()
+   .then(() => console.log('[team-plan] Workspace quota ensured'))
+   .catch(err => {
+     console.error('[team-plan] Failed to grant features', err);
+   })
+   .finally(async () => {
+     await prisma.$disconnect();
+   });
+ NODE
+ }
+
 log "Running AFFiNE pre-deployment migrations"
 ensure_runtime_envs
+ wait_for_indexer
+ seed_manticore_tables
 node ./scripts/self-host-predeploy.js
+ patch_upload_limits
+ grant_team_plan_features

 log "Starting AFFiNE server"
 exec node ./dist/main.js

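If the team-plan grant is the part you want to verify, it can be spot-checked directly in PostgreSQL, since `grant_team_plan_features` writes plain rows plus a trigger (a sketch; it relies on `psql` and `DATABASE_URL` being available inside the container, both of which this packaging sets up):

```bash
# From a shell inside the app container (cloudron exec --app affine.due.ren):
# rows created for existing workspaces
psql "$DATABASE_URL" -c "SELECT workspace_id, activated FROM workspace_features WHERE name = 'team_plan_v1';"
# trigger that covers workspaces created later
psql "$DATABASE_URL" -c "SELECT tgname FROM pg_trigger WHERE tgname = 'grant_team_plan_feature_trigger';"
```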
run-buddy.sh (new file, 13 lines)

@@ -0,0 +1,13 @@
#!/bin/bash
set -euo pipefail

ENV_EXPORT_FILE=${ENV_EXPORT_FILE:-/run/affine/runtime.env}

if [ -f "$ENV_EXPORT_FILE" ]; then
    set -a
    # shellcheck disable=SC1090
    source "$ENV_EXPORT_FILE"
    set +a
fi

exec /usr/bin/manticore-buddy "$@"
run-manticore.sh (new file, 10 lines)

@@ -0,0 +1,10 @@
#!/bin/bash
set -euo pipefail

MANTICORE_CONF=${MANTICORE_CONF:-/app/data/manticore/manticore.conf}
MANTICORE_RUN_DIR=${MANTICORE_RUN_DIR:-/run/manticore}

mkdir -p "$MANTICORE_RUN_DIR"
rm -f "$MANTICORE_RUN_DIR/searchd.pid"

exec /usr/bin/searchd --nodetach -c "$MANTICORE_CONF"
start.sh (271 changed lines)

@@ -9,12 +9,24 @@ APP_BUILD_DIR=${APP_BUILD_DIR:-/app/code/affine}
 APP_HOME_DIR=${APP_HOME_DIR:-/app/data/home}
 AFFINE_HOME=${AFFINE_HOME:-$APP_HOME_DIR/.affine}
 ENV_EXPORT_FILE=${ENV_EXPORT_FILE:-$APP_RUNTIME_DIR/runtime.env}
- export APP_CODE_DIR APP_DATA_DIR APP_RUNTIME_DIR APP_TMP_DIR APP_BUILD_DIR APP_HOME_DIR AFFINE_HOME ENV_EXPORT_FILE
+ MANTICORE_DATA_DIR=${MANTICORE_DATA_DIR:-$APP_DATA_DIR/manticore}
+ MANTICORE_CONFIG_FILE=${MANTICORE_CONFIG_FILE:-$MANTICORE_DATA_DIR/manticore.conf}
+ MANTICORE_HTTP_ENDPOINT=${MANTICORE_HTTP_ENDPOINT:-http://127.0.0.1:9308}
+ export APP_CODE_DIR APP_DATA_DIR APP_RUNTIME_DIR APP_TMP_DIR APP_BUILD_DIR APP_HOME_DIR AFFINE_HOME ENV_EXPORT_FILE \
+     MANTICORE_DATA_DIR MANTICORE_CONFIG_FILE MANTICORE_HTTP_ENDPOINT

 log() {
     printf '[%s] %s\n' "$(date --iso-8601=seconds)" "$*"
 }

+ record_env_var() {
+     local name="$1"
+     local value="$2"
+     if [ -n "$value" ]; then
+         printf '%s=%q\n' "$name" "$value" >> "$ENV_EXPORT_FILE"
+     fi
+ }
+
 require_env() {
     local var_name="$1"
     if [ -z "${!var_name:-}" ]; then
@@ -26,7 +38,8 @@ require_env() {
 prepare_data_dirs() {
     log "Preparing persistent directories"
     mkdir -p "$APP_DATA_DIR/config" "$APP_DATA_DIR/storage" "$APP_DATA_DIR/logs" "$APP_RUNTIME_DIR" "$APP_HOME_DIR" "$AFFINE_HOME"
-     mkdir -p /run/nginx/body /run/nginx/proxy /run/nginx/fastcgi
+     mkdir -p /run/nginx/body /run/nginx/proxy /run/nginx/fastcgi /run/nginx/uwsgi /run/nginx/scgi
+     : > "$ENV_EXPORT_FILE"

     if [ ! -f "$APP_DATA_DIR/config/config.json" ]; then
         log "Seeding default configuration"
@@ -48,6 +61,46 @@ prepare_data_dirs() {
     chown -R cloudron:cloudron "$APP_DATA_DIR" "$APP_RUNTIME_DIR" "$APP_HOME_DIR"
 }

+ prepare_manticore() {
+     log "Preparing Manticore data directory"
+     local buddy_plugins_dir="$MANTICORE_DATA_DIR/plugins/buddy-plugins"
+     mkdir -p "$MANTICORE_DATA_DIR"/{data,binlog,logs,buddy} "$buddy_plugins_dir"
+     cp "$APP_CODE_DIR/manticore/manticore.conf" "$MANTICORE_CONFIG_FILE"
+     local composer_file="$buddy_plugins_dir/composer.json"
+     if [ ! -f "$composer_file" ]; then
+         cat > "$composer_file" <<'JSON'
+ {
+   "require": {},
+   "minimum-stability": "dev"
+ }
+ JSON
+     fi
+     local system_buddy_plugins="/usr/share/manticore/modules/manticore-buddy/buddy-plugins"
+     if [ ! -L "$system_buddy_plugins" ] && [ -w "/usr/share/manticore/modules/manticore-buddy" ]; then
+         rm -rf "$system_buddy_plugins"
+         ln -s "$buddy_plugins_dir" "$system_buddy_plugins"
+     elif [ ! -L "$system_buddy_plugins" ]; then
+         log "Buddy modules directory is read-only; skipping symlink to ${buddy_plugins_dir}"
+     fi
+     mkdir -p /run/manticore
+     chown -R cloudron:cloudron "$MANTICORE_DATA_DIR" /run/manticore
+     record_env_var MANTICORE_CONFIG_FILE "$MANTICORE_CONFIG_FILE"
+     record_env_var MANTICORE_HTTP_ENDPOINT "$MANTICORE_HTTP_ENDPOINT"
+ }
+
+ prepare_runtime_build_dir() {
+     local source_dir="$APP_BUILD_DIR"
+     local runtime_build_dir="$APP_RUNTIME_DIR/affine-build"
+     log "Syncing AFFiNE runtime into $runtime_build_dir"
+     rm -rf "$runtime_build_dir"
+     mkdir -p "$runtime_build_dir"
+     cp -a "$source_dir/." "$runtime_build_dir/"
+     chown -R cloudron:cloudron "$runtime_build_dir"
+     APP_BUILD_DIR="$runtime_build_dir"
+     export APP_BUILD_DIR
+     record_env_var APP_BUILD_DIR "$APP_BUILD_DIR"
+ }
+
 configure_database() {
     require_env CLOUDRON_POSTGRESQL_URL
     local db_url="$CLOUDRON_POSTGRESQL_URL"
@@ -55,9 +108,27 @@ configure_database() {
         db_url="postgresql://${db_url#postgres://}"
     fi
     export DATABASE_URL="$db_url"
+     record_env_var DATABASE_URL "$db_url"
     log "Configured PostgreSQL endpoint"
 }

+ ensure_pgvector_extension() {
+     if [ -z "${DATABASE_URL:-}" ]; then
+         log "DATABASE_URL not set; skipping pgvector extension check"
+         return
+     fi
+     if ! command -v psql >/dev/null 2>&1; then
+         log "psql client unavailable; cannot verify pgvector extension"
+         return
+     fi
+     log "Ensuring pgvector extension exists"
+     if psql "$DATABASE_URL" -v ON_ERROR_STOP=1 -c "CREATE EXTENSION IF NOT EXISTS vector;" >/dev/null 2>&1; then
+         log "pgvector extension ready"
+     else
+         log "WARNING: Failed to create pgvector extension automatically. Ensure it exists for AI embeddings."
+     fi
+ }
+
 configure_redis() {
     require_env CLOUDRON_REDIS_URL
     local redis_info
@@ -65,18 +136,44 @@ configure_redis() {
 import os
 from urllib.parse import urlparse
 url = os.environ.get('CLOUDRON_REDIS_URL')
- if not url:
-     raise SystemExit('redis url missing')
- parsed = urlparse(url)
+ parsed = urlparse(url) if url else None
+ host = os.environ.get('CLOUDRON_REDIS_HOST')
+ port = os.environ.get('CLOUDRON_REDIS_PORT')
+ password = os.environ.get('CLOUDRON_REDIS_PASSWORD')
+ username = os.environ.get('CLOUDRON_REDIS_USERNAME')
+ db = os.environ.get('CLOUDRON_REDIS_DB')
+ if not host and parsed:
     host = parsed.hostname or 'localhost'
+ if not port and parsed:
     port = parsed.port or 6379
+ if not password and parsed:
     password = parsed.password or ''
+ if not db and parsed:
     db = (parsed.path or '/0').lstrip('/') or '0'
- username = parsed.username or ''
+ if username is None:
+     username = parsed.username if parsed and parsed.username else 'default'
+ host = host or 'localhost'
+ port = port or 6379
+ password = password or ''
+ db = db or '0'
 print(f"{host}\n{port}\n{password}\n{db}\n{username}")
 PY
 )
 IFS=$'\n' read -r host port password db username <<<"$redis_info"
+ if [ -n "${CLOUDRON_REDIS_HOST:-}" ]; then
+     host="$CLOUDRON_REDIS_HOST"
+ fi
+ if [ -n "${CLOUDRON_REDIS_PORT:-}" ]; then
+     port="$CLOUDRON_REDIS_PORT"
+ fi
+ if [ -n "${CLOUDRON_REDIS_PASSWORD:-}" ]; then
+     password="$CLOUDRON_REDIS_PASSWORD"
+ fi
+ if [ -n "${CLOUDRON_REDIS_USERNAME:-}" ]; then
+     username="$CLOUDRON_REDIS_USERNAME"
+ elif [ -z "$username" ]; then
+     username="default"
+ fi
 export REDIS_SERVER_HOST="$host"
 export REDIS_SERVER_PORT="$port"
 export REDIS_SERVER_PASSWORD="$password"
@@ -84,20 +181,79 @@ PY
 export REDIS_SERVER_USERNAME="$username"
 export REDIS_URL="$CLOUDRON_REDIS_URL"
 export REDIS_SERVER_URL="$CLOUDRON_REDIS_URL"
+ record_env_var REDIS_SERVER_HOST "$host"
+ record_env_var REDIS_SERVER_PORT "$port"
+ record_env_var REDIS_SERVER_PASSWORD "$password"
+ record_env_var REDIS_SERVER_DATABASE "$db"
+ record_env_var REDIS_SERVER_USERNAME "$username"
+ record_env_var REDIS_URL "$CLOUDRON_REDIS_URL"
+ record_env_var REDIS_SERVER_URL "$CLOUDRON_REDIS_URL"
+ python3 - <<'PY'
+ import json
+ import os
+ from pathlib import Path
+ config_path = Path(os.environ['APP_DATA_DIR']) / 'config' / 'config.json'
+ data = json.loads(config_path.read_text())
+ redis = data.setdefault('redis', {})
+ redis['host'] = os.environ.get('REDIS_SERVER_HOST', '')
+ redis['port'] = int(os.environ.get('REDIS_SERVER_PORT') or 6379)
+ redis['password'] = os.environ.get('REDIS_SERVER_PASSWORD', '')
+ redis['username'] = os.environ.get('REDIS_SERVER_USERNAME', '')
+ redis['db'] = int(os.environ.get('REDIS_SERVER_DATABASE') or 0)
+ config_path.write_text(json.dumps(data, indent=2))
+ PY
 log "Configured Redis endpoint"
 }

 configure_mail() {
-     if [ -z "${CLOUDRON_MAIL_SMTP_SERVER:-}" ]; then
-         log "Cloudron mail addon not configured, skipping SMTP setup"
+     local host=""
+     local port=""
+     local user=""
+     local password=""
+     local sender=""
+     local ignore_tls="false"
+
+     if [ -n "${CLOUDRON_EMAIL_SMTP_SERVER:-}" ]; then
+         host="$CLOUDRON_EMAIL_SMTP_SERVER"
+         port="${CLOUDRON_EMAIL_SMTPS_PORT:-${CLOUDRON_EMAIL_SMTP_PORT:-587}}"
+         user="${CLOUDRON_EMAIL_SMTP_USERNAME:-}"
+         password="${CLOUDRON_EMAIL_SMTP_PASSWORD:-}"
+         sender="${CLOUDRON_EMAIL_FROM:-AFFiNE <no-reply@cloudron.local>}"
+         ignore_tls="${MAILER_IGNORE_TLS:-true}"
+         log "Configuring SMTP using Cloudron email addon"
+     elif [ -n "${CLOUDRON_MAIL_SMTP_SERVER:-}" ]; then
+         host="$CLOUDRON_MAIL_SMTP_SERVER"
+         port="${CLOUDRON_MAIL_SMTP_PORT:-587}"
+         user="${CLOUDRON_MAIL_SMTP_USERNAME:-}"
+         password="${CLOUDRON_MAIL_SMTP_PASSWORD:-}"
+         sender="${CLOUDRON_MAIL_FROM:-AFFiNE <no-reply@cloudron.local>}"
+         ignore_tls="${MAILER_IGNORE_TLS:-false}"
+         if [ -n "${CLOUDRON_MAIL_SMTP_SECURE:-}" ]; then
+             case "${CLOUDRON_MAIL_SMTP_SECURE,,}" in
+                 true|1|yes) port="${CLOUDRON_MAIL_SMTP_PORT:-465}" ;;
+             esac
+         fi
+         log "Configuring SMTP using Cloudron sendmail addon"
+     else
+         log "Cloudron mail/email addon not configured, skipping SMTP setup"
         return
     fi
- export MAILER_HOST="$CLOUDRON_MAIL_SMTP_SERVER"
- export MAILER_PORT="${CLOUDRON_MAIL_SMTP_PORT:-587}"
- export MAILER_USER="${CLOUDRON_MAIL_SMTP_USERNAME:-}"
- export MAILER_PASSWORD="${CLOUDRON_MAIL_SMTP_PASSWORD:-}"
- export MAILER_SENDER="${CLOUDRON_MAIL_FROM:-AFFiNE <no-reply@cloudron.local>}"
+
+     export MAILER_HOST="$host"
+     export MAILER_PORT="$port"
+     export MAILER_USER="$user"
+     export MAILER_PASSWORD="$password"
+     export MAILER_SENDER="${sender:-AFFiNE <no-reply@cloudron.local>}"
+     export MAILER_SERVERNAME="${MAILER_SERVERNAME:-AFFiNE Server}"
+     export MAILER_IGNORE_TLS="$ignore_tls"
+
+     record_env_var MAILER_HOST "$MAILER_HOST"
+     record_env_var MAILER_PORT "$MAILER_PORT"
+     record_env_var MAILER_USER "$MAILER_USER"
+     record_env_var MAILER_PASSWORD "$MAILER_PASSWORD"
+     record_env_var MAILER_SENDER "$MAILER_SENDER"
+     record_env_var MAILER_SERVERNAME "$MAILER_SERVERNAME"
+     record_env_var MAILER_IGNORE_TLS "$MAILER_IGNORE_TLS"
     log "Configured SMTP relay"
 }

@@ -120,38 +276,32 @@ PY
         export AFFINE_SERVER_HTTPS=false
     fi
 fi
- export AFFINE_INDEXER_ENABLED=${AFFINE_INDEXER_ENABLED:-false}
+ record_env_var AFFINE_SERVER_EXTERNAL_URL "${AFFINE_SERVER_EXTERNAL_URL:-}"
+ record_env_var AFFINE_SERVER_HOST "${AFFINE_SERVER_HOST:-}"
+ record_env_var AFFINE_SERVER_HTTPS "${AFFINE_SERVER_HTTPS:-}"
 }

- write_runtime_env() {
-     : > "$ENV_EXPORT_FILE"
-     local vars=(
-         DATABASE_URL
-         REDIS_SERVER_HOST
-         REDIS_SERVER_PORT
-         REDIS_SERVER_PASSWORD
-         REDIS_SERVER_DATABASE
-         REDIS_SERVER_USERNAME
-         REDIS_URL
-         REDIS_SERVER_URL
-         MAILER_HOST
-         MAILER_PORT
-         MAILER_USER
-         MAILER_PASSWORD
-         MAILER_SENDER
-         MAILER_SERVERNAME
-         AFFINE_SERVER_EXTERNAL_URL
-         AFFINE_SERVER_HOST
-         AFFINE_SERVER_HTTPS
-         AFFINE_INDEXER_ENABLED
-     )
-     local var value
-     for var in "${vars[@]}"; do
-         value="${!var-}"
-         if [ -n "$value" ]; then
-             printf '%s=%q\n' "$var" "$value" >> "$ENV_EXPORT_FILE"
-         fi
-     done
+ configure_indexer() {
+     export AFFINE_INDEXER_ENABLED=true
+     export AFFINE_INDEXER_SEARCH_PROVIDER=${AFFINE_INDEXER_SEARCH_PROVIDER:-manticoresearch}
+     export AFFINE_INDEXER_SEARCH_ENDPOINT=${AFFINE_INDEXER_SEARCH_ENDPOINT:-$MANTICORE_HTTP_ENDPOINT}
+     record_env_var AFFINE_INDEXER_ENABLED "$AFFINE_INDEXER_ENABLED"
+     record_env_var AFFINE_INDEXER_SEARCH_PROVIDER "$AFFINE_INDEXER_SEARCH_PROVIDER"
+     record_env_var AFFINE_INDEXER_SEARCH_ENDPOINT "$AFFINE_INDEXER_SEARCH_ENDPOINT"
+
+     python3 - <<'PY'
+ import json
+ import os
+ from pathlib import Path
+ config_path = Path(os.environ['APP_DATA_DIR']) / 'config' / 'config.json'
+ data = json.loads(config_path.read_text())
+ indexer = data.setdefault('indexer', {})
+ indexer['enabled'] = True
+ indexer['provider.type'] = os.environ.get('AFFINE_INDEXER_SEARCH_PROVIDER', 'manticoresearch')
+ indexer['provider.endpoint'] = os.environ.get('AFFINE_INDEXER_SEARCH_ENDPOINT', 'http://127.0.0.1:9308')
+ config_path.write_text(json.dumps(data, indent=2))
+ PY
+     log "Configured indexer endpoint"
 }

 configure_auth() {
@@ -160,6 +310,7 @@ configure_auth() {
 python3 - <<'PY'
 import json
 import os
+ import re
 from pathlib import Path
 config_path = Path(os.environ['APP_DATA_DIR']) / 'config' / 'config.json'
 data = json.loads(config_path.read_text())
@@ -168,9 +319,34 @@ providers = auth.setdefault('providers', {})
 oidc = providers.setdefault('oidc', {})
 oidc['clientId'] = os.environ.get('CLOUDRON_OIDC_CLIENT_ID', '')
 oidc['clientSecret'] = os.environ.get('CLOUDRON_OIDC_CLIENT_SECRET', '')
- oidc['issuer'] = os.environ.get('CLOUDRON_OIDC_ISSUER') or os.environ.get('CLOUDRON_OIDC_DISCOVERY_URL', '')
+ issuer = os.environ.get('CLOUDRON_OIDC_ISSUER') or ''
+ discovery = os.environ.get('CLOUDRON_OIDC_DISCOVERY_URL') or ''
+ resolved_issuer = issuer
+ if not resolved_issuer and discovery:
+     resolved_issuer = re.sub(r'/\.well-known.*$', '', discovery)
+     if not resolved_issuer:
+         resolved_issuer = discovery
+ oidc['issuer'] = resolved_issuer
+ default_scope = os.environ.get('AFFINE_OIDC_SCOPE', 'openid profile email')
+ default_claims = {
+     'claim_id': os.environ.get('AFFINE_OIDC_CLAIM_ID', 'preferred_username'),
+     'claim_email': os.environ.get('AFFINE_OIDC_CLAIM_EMAIL', 'email'),
+     'claim_name': os.environ.get('AFFINE_OIDC_CLAIM_NAME', 'name'),
+ }
 args = oidc.setdefault('args', {})
- args.setdefault('scope', 'openid profile email')
+ args['scope'] = default_scope
+ for key, value in default_claims.items():
+     args.setdefault(key, value)
+ oauth = data.setdefault('oauth', {})
+ oauth_providers = oauth.setdefault('providers', {})
+ oauth_oidc = oauth_providers.setdefault('oidc', {})
+ oauth_oidc['clientId'] = oidc['clientId']
+ oauth_oidc['clientSecret'] = oidc['clientSecret']
+ oauth_oidc['issuer'] = resolved_issuer
+ oauth_args = oauth_oidc.setdefault('args', {})
+ oauth_args['scope'] = default_scope
+ for key, value in default_claims.items():
+     oauth_args.setdefault(key, value)
 config_path.write_text(json.dumps(data, indent=2))
 PY
 log "Enabled Cloudron OIDC for AFFiNE"
@@ -199,13 +375,16 @@ PY
 main() {
     export HOME="$APP_HOME_DIR"
     prepare_data_dirs
+     prepare_manticore
+     prepare_runtime_build_dir
     configure_database
+     ensure_pgvector_extension
     configure_redis
     configure_mail
     configure_server_metadata
+     configure_indexer
     update_server_config
     configure_auth
-     write_runtime_env
     chown -R cloudron:cloudron "$APP_DATA_DIR" "$APP_HOME_DIR"
     log "Starting supervisor"
     exec /usr/bin/supervisord -c "$APP_CODE_DIR/supervisord.conf"

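Because start.sh now funnels every derived setting through `record_env_var`, the runtime env file is a single convenient place to confirm what the supervised processes (including `run-buddy.sh`) will see (a sketch; `/run/affine/runtime.env` is the default `ENV_EXPORT_FILE` that `run-buddy.sh` falls back to):

```bash
# Dump the env file start.sh assembles for the supervised processes.
cloudron exec --app affine.due.ren -- cat /run/affine/runtime.env

# Spot-check the indexer and Manticore values AFFiNE itself sees.
cloudron exec --app affine.due.ren -- env | grep -E 'AFFINE_INDEXER|MANTICORE'
```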
supervisord.conf

@@ -1,7 +1,12 @@
 [unix_http_server]
 file=/run/supervisor.sock
+ chmod=0700
+ chown=root:root

 [supervisord]
 nodaemon=true
+ user=root
- logfile=/dev/null
+ logfile=/run/supervisord.log
+ pidfile=/run/supervisord.pid

 [program:nginx]
@@ -15,6 +20,25 @@ stderr_logfile=/dev/stderr
 stderr_logfile_maxbytes=0
 stopsignal=QUIT

+ [program:manticore]
+ command=/app/code/run-manticore.sh
+ autostart=true
+ autorestart=true
+ startsecs=5
+ priority=12
+ user=cloudron
+ stdout_logfile=/dev/stdout
+ stdout_logfile_maxbytes=0
+ stderr_logfile=/dev/stderr
+ stderr_logfile_maxbytes=0
+ stopsignal=TERM
+
+ [supervisorctl]
+ serverurl=unix:///run/supervisor.sock
+
+ [rpcinterface:supervisor]
+ supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
+
 [program:affine]
 command=/app/code/run-affine.sh
 autostart=true

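With the extra `[program:manticore]` entry, supervisord now manages nginx, manticore, and affine together, and their state can be queried over the Unix socket declared above (a sketch; `supervisorctl` ships with supervisor, the config lives at `/app/code/supervisord.conf`, and the socket permissions mean this needs a root shell inside the container):

```bash
# nginx, manticore and affine should all report RUNNING.
supervisorctl -c /app/code/supervisord.conf status

# Restart just the search daemon without touching the app process.
supervisorctl -c /app/code/supervisord.conf restart manticore
```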
@@ -2,5 +2,38 @@
   "$schema": "https://github.com/toeverything/AFFiNE/releases/latest/download/config.schema.json",
   "server": {
     "name": "AFFiNE Self Hosted Server"
   },
+   "copilot": {
+     "enabled": true,
+     "scenarios": {
+       "override_enabled": true,
+       "scenarios": {
+         "audio_transcribing": "gemini-2.5-flash",
+         "chat": "gemini-2.5-flash",
+         "embedding": "gemini-embedding-001",
+         "image": "gpt-image-1",
+         "rerank": "gpt-4.1",
+         "coding": "claude-sonnet-4-5@20250929",
+         "complex_text_generation": "gpt-4o-2024-08-06",
+         "quick_decision_making": "gpt-5-mini",
+         "quick_text_generation": "gemini-2.5-flash",
+         "polish_and_summarize": "gemini-2.5-flash"
+       }
+     },
+     "providers.openai": {
+       "apiKey": "sk-provide-openai-key-here",
+       "baseURL": "https://api.openai.com/v1"
+     },
+     "providers.anthropic": {
+       "apiKey": "sk-ant-provide-anthropic-key-here",
+       "baseURL": "https://api.anthropic.com/v1"
+     },
+     "providers.gemini": {
+       "apiKey": "provide-gemini-key-here",
+       "baseURL": "https://generativelanguage.googleapis.com/v1beta"
+     },
+     "exa": {
+       "key": "provide-exa-key-or-leave-empty"
+     }
+   }
 }