aurweb/docker/scripts/run-pytests.sh
Kevin Morris 7bfc2bf9b4
fix(FastAPI): Improve sqlite testing speed
This commit adds a new Arch dependency, `libeatmydata`, which
provides the `eatmydata` executable that stubs out fsync() operations.
We now use `eatmydata` to run our sharness tests and pytests in Docker.
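
For context, a minimal sketch of what the wrapper changes at runtime. The
`make` invocation mirrors the script below; the `LD_PRELOAD` line is an
illustration only, and the library path varies by distribution.

```sh
# Wrapped form, as used in run-pytests.sh below:
eatmydata -- make -C test pytest

# Roughly equivalent by hand: preload libeatmydata so fsync()/fdatasync()
# become no-ops in the child processes (library path is distro-dependent).
LD_PRELOAD=/usr/lib/libeatmydata.so make -C test pytest
```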

With `autocommit=True`, which SQLAlchemy requires to keep the
session up to date with external DB modifications, the SQLite backend
issues many fsync calls, especially because we wipe and recreate
records in every DB-bound test.
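
One hedged way to see this pressure from inside the test container
(illustrative only; `strace` is not part of the test tooling here, and
exact counts vary per run):

```sh
# Summarize how many fsync/fdatasync syscalls the sqlite-backed run issues.
strace -f -c -e trace=fsync,fdatasync make -C test pytest
```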

**Before:**

- mysql: 1m42s (elapsed during pytest run)
- sqlite: 3m06s (elapsed during pytest run)

**After:**

- mysql: 1m40s (elapsed during pytest run)
- sqlite: 1m50s (elapsed during pytest run)

Shout out to @klausenbusk, who suggested this as a possible fix,
and it was. Thanks, Kristian!

Closes #120

Signed-off-by: Kevin Morris <kevr@0cost.org>
2021-10-03 15:59:52 -07:00


#!/bin/bash
set -eou pipefail

COVERAGE=1
PARAMS=()

# Collect arguments: `--no-coverage` disables the coverage report; anything
# else is forwarded to `make -C test` ahead of the pytest target.
while [ $# -ne 0 ]; do
    key="$1"
    case "$key" in
        --no-coverage)
            COVERAGE=0
            shift
            ;;
        -*)
            echo "usage: $0 [--no-coverage] targets ..."
            exit 1
            ;;
        *)
            PARAMS+=("$key")
            shift
            ;;
    esac
done
# Initialize the new database; ignore errors.
python -m aurweb.initdb 2>/dev/null || \
    (echo "Error: aurweb.initdb failed; already initialized?" && /bin/true)

# Run pytest with optional targets in front of it.
eatmydata -- make -C test "${PARAMS[@]}" pytest

# By default, report coverage and move it into cache.
if [ $COVERAGE -eq 1 ]; then
    make -C test coverage

    # /cache is mounted as a volume. Copy coverage into it.
    # Users can then sanitize the coverage locally in their
    # aurweb root directory: ./util/fix-coverage ./cache/.coverage
    rm -f /cache/.coverage
    cp -v .coverage /cache/.coverage
    chmod 666 /cache/.coverage
fi
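
As the in-script comments above suggest, a typical follow-up on the host
looks roughly like this (a sketch; it assumes coverage.py is installed
locally and that the sanitized data ends up as `./.coverage` in the aurweb
root):

```sh
# From the aurweb checkout on the host, after the container has populated ./cache:
./util/fix-coverage ./cache/.coverage

# Optional: inspect the sanitized coverage data locally.
coverage report
```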