Compare commits

..

18 Commits

| Author | SHA1 | Message | Date |
| ------------ | ---------- | ---------------------------------------- | -------------------------- |
| Geoff Bourne | 957d11655b | Auto-merging via docker-versions-create  | 2020-11-25 15:56:43 -06:00 |
| Geoff Bourne | 50c22ac469 | Auto-merging via docker-versions-create  | 2020-08-09 13:07:31 -05:00 |
| Geoff Bourne | 81e1cd8cfd | Auto-merging via docker-versions-create  | 2020-07-26 08:30:21 -05:00 |
| Geoff Bourne | 8387e9bd26 | Auto-merging via docker-versions-create  | 2020-07-18 18:40:48 -05:00 |
| Geoff Bourne | 4afdb289c0 | Auto-merging via docker-versions-create  | 2020-07-11 13:13:48 -05:00 |
| Geoff Bourne | fe637353d8 | Auto-merging via docker-versions-create  | 2020-07-10 17:11:51 -05:00 |
| Geoff Bourne | 9160501f0a | Auto-merging via docker-versions-create  | 2020-07-04 14:58:15 -05:00 |
| Geoff Bourne | 6eba5062ec | Auto-merging via docker-versions-create  | 2020-06-20 15:45:09 -05:00 |
| Geoff Bourne | fc96723db1 | Auto-merging via docker-versions-create  | 2020-06-19 13:27:05 -05:00 |
| Geoff Bourne | 8d3e461b4c | Auto-merging via docker-versions-create  | 2020-05-20 08:15:12 -05:00 |
| Geoff Bourne | fd73417411 | Auto-merging via docker-versions-create  | 2020-05-02 09:34:30 -05:00 |
| Geoff Bourne | 1207b9a685 | Auto-merging via docker-versions-create  | 2020-04-25 12:11:09 -05:00 |
| Geoff Bourne | e6259bfd9d | Auto-merging via docker-versions-create  | 2020-04-17 21:29:12 -05:00 |
| Geoff Bourne | b5e7b952e4 | Auto-merging via docker-versions-create  | 2020-04-11 08:51:52 -05:00 |
| Geoff Bourne | ac5b960182 | Auto-merging via docker-versions-create  | 2020-04-10 11:08:59 -05:00 |
| Geoff Bourne | 3299dec733 | Auto-merging via docker-versions-create  | 2020-04-03 13:31:44 -05:00 |
| Geoff Bourne | 578f06087f | Changed JVM_XX_OPTS to use default GC    | 2020-03-13 10:55:19 -05:00 |
| Geoff Bourne | fb364e8301 | Prepared adopt13 branch                  | 2020-03-02 21:08:22 -06:00 |
23 changed files with 203 additions and 311 deletions

View File

@@ -9,7 +9,6 @@ on:
- adopt11
- adopt13
- adopt14
- adopt15
tags:
- "[0-9]+.[0-9]+.[0-9]+"
- "[0-9]+.[0-9]+.[0-9]+-openj9"
@@ -17,7 +16,6 @@ on:
- "[0-9]+.[0-9]+.[0-9]+-adopt11"
- "[0-9]+.[0-9]+.[0-9]+-adopt13"
- "[0-9]+.[0-9]+.[0-9]+-adopt14"
- "[0-9]+.[0-9]+.[0-9]+-adopt15"
jobs:
test:

View File

@@ -1,4 +1,4 @@
FROM openjdk:8u212-jre-alpine
FROM adoptopenjdk/openjdk13:alpine-jre
LABEL org.opencontainers.image.authors="Geoff Bourne <itzgeoff@gmail.com>"
@@ -70,12 +70,11 @@ COPY log4j2.xml /tmp/log4j2.xml
WORKDIR /data
ENV UID=1000 GID=1000 \
JVM_XX_OPTS="-XX:+UseG1GC" MEMORY="1G" \
MEMORY="1G" \
TYPE=VANILLA VERSION=LATEST FORGEVERSION=RECOMMENDED SPONGEBRANCH=STABLE SPONGEVERSION= FABRICVERSION=LATEST LEVEL=world \
PVP=true DIFFICULTY=easy ENABLE_RCON=true RCON_PORT=25575 RCON_PASSWORD=minecraft \
LEVEL_TYPE=DEFAULT SERVER_PORT=25565 ONLINE_MODE=TRUE SERVER_NAME="Dedicated Server" \
ENABLE_AUTOPAUSE=false AUTOPAUSE_TIMEOUT_EST=3600 AUTOPAUSE_TIMEOUT_KN=120 AUTOPAUSE_TIMEOUT_INIT=600 \
AUTOPAUSE_PERIOD=10 AUTOPAUSE_KNOCK_INTERFACE=eth0
ENABLE_AUTOPAUSE=false AUTOPAUSE_TIMEOUT_EST=3600 AUTOPAUSE_TIMEOUT_KN=120 AUTOPAUSE_TIMEOUT_INIT=600 AUTOPAUSE_PERIOD=10
COPY start* /
COPY health.sh /
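For context on the `JVM_XX_OPTS` default shown above (and the "Changed JVM_XX_OPTS to use default GC" commit in this branch), here is a minimal sketch of overriding the GC flags at run time; the container name `mc` and the flag value are illustrative:

```
# Override the image's JVM_XX_OPTS default at run time (illustrative value)
docker run -d --name mc -e EULA=TRUE -p 25565:25565 \
  -e JVM_XX_OPTS="-XX:+UseG1GC" \
  itzg/minecraft-server
```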

View File

@@ -146,15 +146,13 @@ To use a different version of Java, please use a docker tag to run your Minecraf
| Tag name | Description | Linux |
| -------------- | ------------------------------------------- | ------------ |
| latest | **Default**. Uses Java version 8 | Alpine Linux |
| adopt15 | Uses Java version 15 from AdoptOpenJDK | Alpine Linux |
| adopt14 | Uses Java version 14 from AdoptOpenJDK | Alpine Linux |
| adopt13 | Uses Java version 13 from AdoptOpenJDK | Alpine Linux |
| adopt11 | Uses Java version 11 from AdoptOpenJDK | Alpine Linux |
| latest | **Default**. Uses Java version 8 update 212 | Alpine Linux |
| adopt14 | Uses Java version 14 latest update | Alpine Linux |
| adopt13 | Uses Java version 13 latest update | Alpine Linux |
| adopt11 | Uses Java version 11 latest update | Alpine Linux |
| openj9 | Uses Eclipse OpenJ9 JVM | Alpine Linux |
| openj9-nightly | Uses Eclipse OpenJ9 JVM testing builds | Alpine Linux |
| multiarch | Uses Java version 8 latest update | Debian Linux |
| multiarch-latest | Uses Java version 15 latest update | Debian Linux |
For example, to use Java version 13:
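The command that follows this sentence sits outside the hunk; a minimal sketch, assuming the standard `docker run` form used elsewhere in this README, would be:

```
docker run -d --name mc -e EULA=TRUE -p 25565:25565 itzg/minecraft-server:adopt13
```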
@@ -182,22 +180,22 @@ healthy
Some orchestration systems, such as Portainer, don't allow disabling the default `HEALTHCHECK` declared by this image. In those cases you can approximate disabling health checks by setting the environment variable `DISABLE_HEALTHCHECK` to `true`.
## Autopause
## Autopause (experimental)
### Description
> EXPERIMENTAL: this feature only works with default bridge networking using official Docker distributions. Host networking and container management software, such as Portainer, and NAS solutions do not seem to provide compatible networking.
There are various bug reports on [Mojang](https://bugs.mojang.com) about high CPU usage of servers running newer versions, even with few or no clients connected (e.g. [this one](https://bugs.mojang.com/browse/MC-149018)); in fact, the autopause functionality is based on [this comment in that thread](https://bugs.mojang.com/browse/MC-149018?focusedCommentId=593606&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-593606).
Autopause functionality has been added to this image to monitor whether clients are connected to the server. If no client is connected for a specified time, the Java process is stopped. When the server port is knocked (e.g. by the in-game Multiplayer server overview), the process is resumed. The experience for the client does not change.
Of course, even loaded chunks are not ticked when the process is stopped.
From the server's point of view, the pausing causes a single tick to take as long as the process is stopped, so the server watchdog might intervene after the process is continued, possibly forcing a container restart. To prevent this, ensure that the `max-tick-time` in the `server.properties` file is set correctly. Non-vanilla versions might have their own configuration file; you may have to disable their watchdogs separately (e.g. for PAPER servers).
From the server's point of view, the pausing causes a single tick to take as long as the process is stopped, so the server watchdog might intervene after the process is continued, possibly forcing a container restart. To prevent this, ensure that the `max-tick-time` in the `server.properties` file is set correctly.
On startup the `server.properties` file is checked and, if applicable, a warning is printed to the terminal. When the server is created (no data available in the persistent directory), the properties file is created with the Watchdog disabled.
The utility used to wake the server (`knock(d)`) works at the network interface level, so the correct interface has to be set via the `AUTOPAUSE_KNOCK_INTERFACE` variable when using non-default networking environments (e.g. host networking, Portainer, or NAS solutions). See the description of the variable below.
A starting example compose file is provided in [examples/docker-compose-autopause.yml](examples/docker-compose-autopause.yml).
### Enabling Autopause
@@ -208,7 +206,7 @@ Enable the Autopause functionality by setting:
-e ENABLE_AUTOPAUSE=TRUE
```
The following environment variables define the behaviour of auto-pausing:
There are 4 more environment variables that define the behaviour:
* `AUTOPAUSE_TIMEOUT_EST`, default `3600` (seconds)
describes the time between the last client disconnect and the pausing of the process (read as timeout established)
* `AUTOPAUSE_TIMEOUT_INIT`, default `600` (seconds)
@@ -217,8 +215,6 @@ describes the time between server start and the pausing of the process, when no
describes the time between a knock on the port (e.g. by the main menu ping) and the pausing of the process, when no client connects in between (read as timeout knocked)
* `AUTOPAUSE_PERIOD`, default `10` (seconds)
describes the period of the daemonized state machine that handles the pausing of the process (resuming is done independently)
* `AUTOPAUSE_KNOCK_INTERFACE`, default `eth0`
<br>Describes the interface passed to the `knockd` daemon. If the default interface does not work, run the `ifconfig` command inside the container and derive the interface receiving the incoming connection from its output. The passed interface must exist inside the container. Using the loopback interface (`lo`) likely does not yield the desired results. A combined example follows this list.
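Pulling the variables in the list above together, a minimal autopause-enabled run might look like this; the values shown are simply the documented defaults, not tuning recommendations:

```
docker run -d --name mc -e EULA=TRUE -p 25565:25565 \
  -e ENABLE_AUTOPAUSE=TRUE \
  -e AUTOPAUSE_TIMEOUT_EST=3600 \
  -e AUTOPAUSE_TIMEOUT_INIT=600 \
  -e AUTOPAUSE_TIMEOUT_KN=120 \
  -e AUTOPAUSE_PERIOD=10 \
  -e AUTOPAUSE_KNOCK_INTERFACE=eth0 \
  itzg/minecraft-server
```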
## Deployment Templates and Examples
@@ -305,17 +301,13 @@ or downloading a world with the `WORLD` option.
There are two additional volumes that can be mounted; `/mods` and `/config`.
Any files in either of these filesystems will be copied over to the main
`/data` filesystem before starting Minecraft. If you want old mods to be removed as the `/mods` content is updated, then add `-e REMOVE_OLD_MODS=TRUE`. If you are running a `BUKKIT` distribution this will affect all files inside the `plugins/` directory. You can fine-tune the removal process by specifying the `REMOVE_OLD_MODS_INCLUDE` and `REMOVE_OLD_MODS_EXCLUDE` variables. By default everything will be removed. You can also specify the `REMOVE_OLD_MODS_DEPTH` (default 16) variable to only delete files up to a certain level.
> For example: `-e REMOVE_OLD_MODS=TRUE -e REMOVE_OLD_MODS_INCLUDE="*.jar" -e REMOVE_OLD_MODS_DEPTH=1` will remove all old jar files that are directly inside the `plugins/` or `mods/` directory.
`/data` filesystem before starting Minecraft. If you want old mods to be removed as the `/mods` content is updated, then add `-e REMOVE_OLD_MODS=TRUE`.
This works well if you want to have a common set of modules in a separate
location, but still have multiple worlds with different server requirements
in either persistent volumes or a downloadable archive.
You can specify the destination of the configs located inside the `/config` mount by setting the `COPY_CONFIG_DEST` variable. The configs are copied recursively to the `/data/config` directory by default. If a file was updated directly inside the `/data/*` directory and is newer than the file in the `/config/*` mount, it will not be overridden.
> For example: `-v ./config:/config -e COPY_CONFIG_DEST=/data` will allow you to copy over your `bukkit.yml` and so on directly into the server directory.
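As a sketch combining the two attach points described above (the host paths are placeholders):

```
docker run -d --name mc -e EULA=TRUE -p 25565:25565 \
  -v /path/on/host/mods:/mods \
  -v /path/on/host/config:/config \
  -e REMOVE_OLD_MODS=TRUE \
  itzg/minecraft-server
```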
### Replacing variables inside configs
@@ -539,12 +531,6 @@ The following example uses `/modpacks` as the container path as the pre-download
-e CF_SERVER_MOD=/modpacks/SkyFactory_4_Server_4.1.0.zip \
-p 25565:25565 -e EULA=TRUE --name mc itzg/minecraft-server
#### Modpack data directory
By default, CurseForge modpacks are expanded into the sub-directory `/data/FeedTheBeast` and executed from there. (The default location was chosen for legacy reasons, when Curse and FTB were maintained together.)
The directory can be changed by setting `CF_BASE_DIR`, such as `-e CF_BASE_DIR=/data`.
#### Buggy start scripts
Some modpacks have buggy or overly complex start scripts. You can avoid using the bundled start script and use this image's standard server-starting logic by adding `-e USE_MODPACK_START_SCRIPT=false`.
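Tying the CurseForge options in this section together, a hedged sketch; `TYPE=CURSEFORGE` is assumed from the image's CurseForge documentation and is not shown in this hunk, and the modpack path mirrors the example above:

```
docker run -d --name mc -e EULA=TRUE -p 25565:25565 \
  -v /path/on/host/modpacks:/modpacks \
  -e TYPE=CURSEFORGE \
  -e CF_SERVER_MOD=/modpacks/SkyFactory_4_Server_4.1.0.zip \
  -e CF_BASE_DIR=/data \
  -e USE_MODPACK_START_SCRIPT=false \
  itzg/minecraft-server
```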
@@ -1135,4 +1121,4 @@ To run this image on a RaspberryPi 3 B+, 4, or newer, use the image tag
itzg/minecraft-server:multiarch
> NOTE: you may need to lower the memory allocation, such as `-e MEMORY=750m`
> NOTE: you may need to lower the memory allocation, such as `-e MEMORY=750m`
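A minimal sketch of the Raspberry Pi invocation with the lowered memory allocation mentioned in the note:

```
docker run -d --name mc -e EULA=TRUE -p 25565:25565 \
  -e MEMORY=750m \
  itzg/minecraft-server:multiarch
```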

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#set -x
# Use this variable to indicate a list of branches that docker hub is watching
branches_list=('openj9' 'openj9-nightly' 'adopt11' 'adopt13' 'adopt14' 'adopt15' 'multiarch' 'multiarch-latest')
branches_list=('openj9' 'openj9-nightly' 'adopt11' 'adopt13' 'adopt14' 'multiarch' 'multiarch-latest')
function TrapExit {
echo "Checking out back in master"

Binary image file changed (not shown). Before: 34 KiB.

View File

@@ -1,3 +1,3 @@
Place server [modpacks downloaded from CurseForge](https://www.curseforge.com/minecraft/modpacks) in this directory.
Please server [modpacks downloaded from CurseForge](https://www.curseforge.com/minecraft/modpacks) in this directory.
The example [`docker-compose-curseforge.yml`](../docker-compose-curseforge.yml) references a modpack downloaded from <https://www.curseforge.com/minecraft/modpacks/skyfactory-4/files/2787018>.

View File

@@ -2,48 +2,23 @@
. /autopause/autopause-fcns.sh
. ${SCRIPTS:-/}start-utils
. /start-utils
autopause_error_loop() {
logAutopause "Available interfaces within the docker container:"
INTERFACES=$(echo /sys/class/net/*)
INTERFACES=${INTERFACES//\/sys\/class\/net\//}
logAutopause " $INTERFACES"
logAutopause "Please set the environment variable AUTOPAUSE_KNOCK_INTERFACE to the interface that handles incoming connections."
logAutopause "If unsure which interface to choose, run the ifconfig command in the container."
logAutopause "Autopause failed to initialize. This log entry will be printed every 30 minutes."
sudo /usr/sbin/knockd -c /tmp/knockd-config.cfg -d
if [ $? -ne 0 ] ; then
while :
do
sleep 1800
logAutopause "Autopause failed to initialize."
if [[ -n $(ps -o comm | grep java) ]] ; then
break
fi
sleep 0.1
done
}
# wait for java process to be started
while :
do
if java_process_exists ; then
break
fi
sleep 0.1
done
# check for interface existence
if [[ -z "$AUTOPAUSE_KNOCK_INTERFACE" ]] ; then
logAutopause "AUTOPAUSE_KNOCK_INTERFACE variable must not be empty!"
autopause_error_loop
fi
if ! [[ -d "/sys/class/net/$AUTOPAUSE_KNOCK_INTERFACE" ]] ; then
logAutopause "Selected interface \"$AUTOPAUSE_KNOCK_INTERFACE\" does not exist!"
autopause_error_loop
fi
sudo /usr/sbin/knockd -c /tmp/knockd-config.cfg -d -i "$AUTOPAUSE_KNOCK_INTERFACE"
if [ $? -ne 0 ] ; then
logAutopause "Failed to start knockd daemon."
logAutopause "Probable cause: Unable to attach to interface \"$AUTOPAUSE_KNOCK_INTERFACE\"."
autopause_error_loop
logAutopause "Possible cause: docker's host network mode."
logAutopause "Recreate without host mode or disable autopause functionality."
logAutopause "Stopping server."
killall -SIGTERM java
exit 1
fi
STATE=INIT
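The log messages above direct the operator to inspect the interfaces available inside the container; one way to do that from the host, assuming a container named `mc`:

```
# List the network interfaces visible inside the running container
docker exec mc ls /sys/class/net
```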

View File

@@ -8,10 +8,6 @@ java_running() {
[[ $( ps -a -o stat,comm | grep 'java' | awk '{ print $1 }') =~ ^S.*$ ]]
}
java_process_exists() {
[[ -n "$(ps -a -o comm | grep 'java')" ]]
}
rcon_client_exists() {
[[ -n "$(ps -a -o comm | grep 'rcon-cli')" ]]
}

View File

@@ -17,5 +17,5 @@ if [[ $( ps -a -o stat,comm | grep 'java' | awk '{ print $1 }') =~ ^S.*$ ]] ; th
# finally pause the process
logAutopauseAction "Pausing Java process"
pkill -STOP java
killall -q -STOP java
fi

View File

@@ -4,5 +4,5 @@
if [[ $( ps -a -o stat,comm | grep 'java' | awk '{ print $1 }') =~ ^T.*$ ]] ; then
logAutopauseAction "Knocked, resuming Java process"
pkill -CONT java
killall -q -CONT java
fi
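Both the pause and resume scripts boil down to sending SIGSTOP/SIGCONT to the Java process. For manual testing, the same signals can be sent from the host; this sketch assumes a container named `mc`, sufficient privileges for `docker exec`, and whichever of `pkill`/`killall` the image variant ships:

```
# Pause the server's Java process
docker exec mc pkill -STOP java
# Resume it again
docker exec mc pkill -CONT java
```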

View File

@@ -1,2 +1,2 @@
%minecraft ALL=(ALL) NOPASSWD:/usr/bin/pkill
%minecraft ALL=(ALL) NOPASSWD:/usr/bin/killall
%minecraft ALL=(ALL) NOPASSWD:/usr/sbin/knockd

View File

@@ -46,15 +46,10 @@ if ! [[ $AUTOPAUSE_TIMEOUT_INIT =~ ^[0-9]+$ ]] ; then
export AUTOPAUSE_TIMEOUT_INIT
log "Warning: AUTOPAUSE_TIMEOUT_INIT is not numeric, set to 600 (seconds)"
fi
if [[ "$AUTOPAUSE_KNOCK_INTERFACE" == "lo" ]] ; then
log "Warning: AUTOPAUSE_KNOCK_INTERFACE is set to the local loopback interface."
log " This is not advisable, as incoming connections are likely not picked up there."
log " Continuing with this setting."
fi
if [[ -n "$MAX_TICK_TIME" && "$MAX_TICK_TIME" != "-1" ]] ; then
if [[ -n $MAX_TICK_TIME ]] ; then
log "Warning: MAX_TICK_TIME is non-default, for autopause to work properly, this check should be disabled (-1 for versions >= 1.8.1)"
elif [[ -z "$MAX_TICK_TIME" ]] ; then
else
if versionLessThan 1.8.1; then
# 10 years
MAX_TICK_TIME=315360000000
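The warning above concerns the server watchdog; a sketch of the one-variable change for versions >= 1.8.1, assuming the image maps `MAX_TICK_TIME` to `max-tick-time` in `server.properties` as it does for its other server properties:

```
docker run -d --name mc -e EULA=TRUE -p 25565:25565 \
  -e ENABLE_AUTOPAUSE=TRUE \
  -e MAX_TICK_TIME=-1 \
  itzg/minecraft-server
```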

View File

@@ -5,9 +5,7 @@ set -e
. ${SCRIPTS:-/}start-utils
isDebugging && set -x
: ${FTB_BASE_DIR:=${CF_BASE_DIR:-/data/FeedTheBeast}}
export FTB_BASE_DIR
export FTB_BASE_DIR=/data/FeedTheBeast
legacyJavaFixerUrl=https://ftb.forgecdn.net/FTB2/maven/net/minecraftforge/lex/legacyjavafixer/1.0/legacyjavafixer-1.0.jar
export TYPE=FEED-THE-BEAST

View File

@@ -17,16 +17,12 @@ if isURL ${CUSTOM_SERVER}; then
fi
elif [[ -f ${CUSTOM_SERVER} ]]; then
export SERVER=${CUSTOM_SERVER}
elif [[ ${GENERIC_PACK} ]]; then
log "Using custom server jar from generic pack at ${CUSTOM_SERVER} ..."
log "Using custom server jar at ${CUSTOM_SERVER} ..."
export SERVER=${CUSTOM_SERVER}
else
log "CUSTOM_SERVER is not properly set to a URL or existing jar file"
exit 2
fi
export SKIP_LOG4J_CONFIG=true
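A hedged sketch of the custom-server path handled above; `TYPE=CUSTOM` is assumed from the image's documentation rather than shown in this hunk, and the jar URL is a placeholder:

```
docker run -d --name mc -e EULA=TRUE -p 25565:25565 \
  -e TYPE=CUSTOM \
  -e CUSTOM_SERVER=https://example.com/custom-server.jar \
  itzg/minecraft-server
```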

View File

@@ -4,66 +4,54 @@
set -o pipefail
isDebugging && set -x
if [[ $PAPER_DOWNLOAD_URL ]]; then
export SERVER=$(getFilenameFromUrl "${PAPER_DOWNLOAD_URL}")
# PaperMC API v2 docs : https://papermc.io/api/docs/swagger-ui/index.html?configUrl=/api/openapi/swagger-config
if [ -f "$SERVER" ]; then
zarg=(-z "$SERVER")
fi
echo "Preparing custom PaperMC jar from $PAPER_DOWNLOAD_URL"
curl -fsSL -o "$SERVER" "${zarg[@]}" "${PAPER_DOWNLOAD_URL}"
else
# PaperMC API v2 docs : https://papermc.io/api/docs/swagger-ui/index.html?configUrl=/api/openapi/swagger-config
build=$(curl -fsSL "https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}" -H "accept: application/json" \
| jq '.builds[-1]')
case $? in
0)
;;
22)
versions=$(curl -fsSL "https://papermc.io/api/v2/projects/paper" -H "accept: application/json")
if [[ $VERSION = LATEST ]]; then
VANILLA_VERSION=$(echo "$versions" | jq -r '.versions[-1]')
log "WARN: using ${VANILLA_VERSION} since that's the latest provided by PaperMC"
# re-execute the current script with the newly computed version
exec $0 "$@"
fi
log "ERROR: ${VANILLA_VERSION} is not published by PaperMC"
log " Set VERSION to one of the following: "
log " $(echo "$versions" | jq -r '.versions | join(", ")')"
exit 1
;;
*)
echo "ERROR: unknown error while looking up PaperMC version=${VANILLA_VERSION}"
exit 1
;;
esac
if [ $? != 0 ]; then
echo "ERROR: failed to lookup PaperMC build from version ${VANILLA_VERSION}"
build=$(curl -fsSL "https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}" -H "accept: application/json" \
| jq '.builds[-1]')
case $? in
0)
;;
22)
versions=$(curl -fsSL "https://papermc.io/api/v2/projects/paper" -H "accept: application/json")
if [[ $VERSION = LATEST ]]; then
VANILLA_VERSION=$(echo "$versions" | jq -r '.versions[-1]')
log "WARN: using ${VANILLA_VERSION} since that's the latest provided by PaperMC"
# re-execute the current script with the newly computed version
exec $0 "$@"
fi
log "ERROR: ${VANILLA_VERSION} is not published by PaperMC"
log " Set VERSION to one of the following: "
log " $(echo "$versions" | jq -r '.versions | join(", ")')"
exit 1
fi
export SERVER=$(curl -fsSL "https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}/builds/${build}" -H "accept: application/json" \
| jq -r '.downloads.application.name')
if [ $? != 0 ]; then
echo "ERROR: failed to lookup PaperMC download file from version=${VANILLA_VERSION} build=${build}"
;;
*)
echo "ERROR: unknown error while looking up PaperMC version=${VANILLA_VERSION}"
exit 1
fi
;;
esac
if [ $? != 0 ]; then
echo "ERROR: failed to lookup PaperMC build from version ${VANILLA_VERSION}"
exit 1
fi
if [ -f "$SERVER" ]; then
zarg=(-z "$SERVER")
fi
export SERVER=$(curl -fsSL "https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}/builds/${build}" -H "accept: application/json" \
| jq -r '.downloads.application.name')
if [ $? != 0 ]; then
echo "ERROR: failed to lookup PaperMC download file from version=${VANILLA_VERSION} build=${build}"
exit 1
fi
log "Downloading PaperMC $VANILLA_VERSION (build $build) ..."
curl -fsSL -o "$SERVER" "${zarg[@]}" \
"https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}/builds/${build}/downloads/${SERVER}" \
-H "accept: application/java-archive"
if [ $? != 0 ]; then
echo "ERROR: failed to download PaperMC from version=${VANILLA_VERSION} build=${build} download=${SERVER}"
exit 1
fi
if [ -f "$SERVER" ]; then
zarg=(-z "$SERVER")
fi
log "Downloading PaperMC $VANILLA_VERSION (build $build) ..."
curl -fsSL -o "$SERVER" "${zarg[@]}" \
"https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}/builds/${build}/downloads/${SERVER}" \
-H "accept: application/java-archive"
if [ $? != 0 ]; then
echo "ERROR: failed to download PaperMC from version=${VANILLA_VERSION} build=${build} download=${SERVER}"
exit 1
fi
# Normalize on Spigot for downstream operations
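The lookup logic above can be exercised by hand with the same PaperMC v2 endpoints and `jq` filters the script uses; `1.16.4` is just an illustrative version:

```
VANILLA_VERSION=1.16.4
# Latest build number published for that version
build=$(curl -fsSL "https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}" \
  -H "accept: application/json" | jq '.builds[-1]')
# File name of the downloadable server jar for that build
curl -fsSL "https://papermc.io/api/v2/projects/paper/versions/${VANILLA_VERSION}/builds/${build}" \
  -H "accept: application/json" | jq -r '.downloads.application.name'
```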

View File

@@ -24,4 +24,4 @@ case "X$MODCONFIG" in
esac
fi
exec ${SCRIPTS:-/}start-finalSetupMounts $@
exec ${SCRIPTS:-/}start-finalSetupPlugins $@

View File

@@ -1,30 +1,18 @@
#!/bin/bash
set -e -o pipefail
set -e
. ${SCRIPTS:-/}start-utils
if isDebugging; then
set -x
fi
# CURSE_URL_BASE used in manifest downloads below
CURSE_URL_BASE=${CURSE_URL_BASE:-https://minecraft.curseforge.com/projects}
# Remove old mods/plugins
if isTrue ${REMOVE_OLD_MODS}; then
remove_mods_dest="/data/mods"
case ${TYPE} in
SPIGOT|BUKKIT|PAPER)
remove_mods_dest="/data/plugins"
;;
esac
# only try to remove existing mods dir
if [ -d "$remove_mods_dest" ]; then
log "Removing old mods in $remove_mods_dest..."
find $remove_mods_dest -mindepth 1 -maxdepth ${REMOVE_OLD_MODS_DEPTH:-16} -wholename "${REMOVE_OLD_MODS_INCLUDE:-*}" -not -wholename "${REMOVE_OLD_MODS_EXCLUDE}" -delete
if [ "$REMOVE_OLD_MODS" = "TRUE" ]; then
if [ "$TYPE" = "SPIGOT" ]; then
rm -rf /data/plugins/*
else
log "Directory $remove_mods_dest does not exist; removing nothing."
rm -rf /data/mods/*
fi
fi
@@ -34,7 +22,7 @@ if [[ "$MODPACK" ]]; then
if [[ "${MODPACK}" == *.zip ]]; then
downloadUrl="${MODPACK}"
else
downloadUrl=$(curl -Ls -o /dev/null -w %{effective_url} $MODPACK)
downloadUrl=$(curl -Ls -o /dev/null -w %{url_effective} $MODPACK)
if ! [[ $downloadUrl == *.zip ]]; then
log "ERROR Invalid URL given for MODPACK: $downloadUrl resolved from $MODPACK"
log " Must be HTTP or HTTPS and a ZIP file"
@@ -70,31 +58,39 @@ fi
# If supplied with a URL for a plugin download it.
if [[ "$MODS" ]]; then
if [ "$TYPE" = "SPIGOT" ]; then
out_dir=/data/plugins
else
out_dir=/data/mods
fi
mkdir -p "$out_dir"
for i in ${MODS//,/ }
do
if isURL $i; then
log "Downloading mod/plugin $i ..."
effective_url=$(resolveEffectiveUrl "$i")
if isValidFileURL jar "${effective_url}"; then
out_file=$(getFilenameFromUrl "${effective_url}")
if ! curl -fsSL -o "${out_dir}/$out_file" "${effective_url}"; then
log "ERROR: failed to download from $i into $out_dir"
exit 2
fi
if [[ $i == *.jar ]]; then
EFFECTIVE_MOD_URL=$i
else
log "ERROR: $effective_url resolved from $i is not a valid jar URL"
EFFECTIVE_MOD_URL=$(curl -Ls -o /dev/null -w %{url_effective} $i)
if ! [[ $EFFECTIVE_MOD_URL == *.jar ]]; then
log "ERROR Invalid URL given in MODS: $EFFECTIVE_MOD_URL resolved from $i"
log " Must be HTTP or HTTPS and a JAR file"
exit 1
fi
fi
log "Downloading mod/plugin via HTTP"
log " from $EFFECTIVE_MOD_URL ..."
if ! curl -sSL -o /tmp/${EFFECTIVE_MOD_URL##*/} $EFFECTIVE_MOD_URL; then
log "ERROR: failed to download from $EFFECTIVE_MOD_URL to /tmp/${EFFECTIVE_MOD_URL##*/}"
exit 2
fi
if [ "$TYPE" = "SPIGOT" ]; then
mkdir -p /data/plugins
mv /tmp/${EFFECTIVE_MOD_URL##*/} /data/plugins/${EFFECTIVE_MOD_URL##*/}
else
mkdir -p /data/mods
mv /tmp/${EFFECTIVE_MOD_URL##*/} /data/mods/${EFFECTIVE_MOD_URL##*/}
fi
rm -f /tmp/${EFFECTIVE_MOD_URL##*/}
else
log "ERROR Invalid URL given in MODS: $i"
exit 2
exit 1
fi
done
fi
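To make the newer removal logic in the hunk above concrete, this is roughly what the `find` invocation expands to for a Bukkit-family server with the README's example settings plus a hypothetical exclude pattern:

```
# Expansion with REMOVE_OLD_MODS_INCLUDE="*.jar", REMOVE_OLD_MODS_EXCLUDE="*keep*",
# REMOVE_OLD_MODS_DEPTH=1 and TYPE=PAPER (so the target is /data/plugins)
find /data/plugins -mindepth 1 -maxdepth 1 \
  -wholename "*.jar" -not -wholename "*keep*" -delete
```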
@@ -104,7 +100,7 @@ if [[ "$MANIFEST" ]]; then
EFFECTIVE_MANIFEST_FILE=$MANIFEST
elif isURL "$MANIFEST"; then
EFFECTIVE_MANIFEST_FILE=/tmp/manifest.json
EFFECTIVE_MANIFEST_URL=$(curl -Ls -o /dev/null -w %{effective_url} $MANIFEST)
EFFECTIVE_MANIFEST_URL=$(curl -Ls -o /dev/null -w %{url_effective} $MANIFEST)
curl -Ls -o $EFFECTIVE_MANIFEST_FILE "$EFFECTIVE_MANIFEST_URL"
else
log "MANIFEST='$MANIFEST' is not a valid manifest url or location"
@@ -125,7 +121,7 @@ case "X$EFFECTIVE_MANIFEST_FILE" in
do
if [ ! -f $MOD_DIR/${p}_${f}.jar ]
then
redirect_url="$(curl -Ls -o /dev/null -w %{effective_url} ${CURSE_URL_BASE}/${p})"
redirect_url="$(curl -Ls -o /dev/null -w %{url_effective} ${CURSE_URL_BASE}/${p})"
url="$redirect_url/download/${f}/file"
log Downloading curseforge mod $url
# Manifest usually doesn't have mod names. Using id should be fine, tho
@@ -144,17 +140,17 @@ fi
if [[ "${GENERIC_PACK}" ]]; then
if isURL "${GENERIC_PACK}"; then
log "Downloading generic pack ..."
curl -fsSL -o /tmp/generic_pack.zip "${GENERIC_PACK}"
GENERIC_PACK=/tmp/generic_pack.zip
generic_pack_url=${GENERIC_PACK}
GENERIC_PACK=/tmp/$(basename ${generic_pack_url})
log "Downloading generic pack from ${generic_pack_url} ..."
curl -fsSL -o ${GENERIC_PACK} ${generic_pack_url}
fi
sum_file=/data/.generic_pack.sum
if ! sha256sum -c ${sum_file} -s 2> /dev/null; then
base_dir=/tmp/generic_pack_base
mkdir -p ${base_dir}
isDebugging && ls -l "${GENERIC_PACK}"
unzip -q -d ${base_dir} "${GENERIC_PACK}"
unzip -q -d ${base_dir} ${GENERIC_PACK}
if [ -f /data/manifest.txt ]; then
log "Manifest exists from older generic pack, cleaning up ..."
while read f; do
@@ -172,7 +168,7 @@ if [[ "${GENERIC_PACK}" ]]; then
for d in $(find ${base_dir} -type d); do mkdir -p "$(sed "s#${base_dir}#/data#" <<< $d)"; done
for f in $(find ${base_dir} -type f); do cp -f "$f" "$(sed "s#${base_dir}#/data#" <<< $f)"; done
rm -rf ${base_dir}
sha256sum "${GENERIC_PACK}" > ${sum_file}
sha256sum ${GENERIC_PACK} > ${sum_file}
fi
fi
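Since the generic pack is only re-applied when its checksum differs from the recorded one, a previously applied pack can be forced to re-apply by clearing that record; `mc` is an illustrative container name:

```
# Remove the recorded checksum so the next start re-extracts the generic pack
docker exec mc rm -f /data/.generic_pack.sum
```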

View File

@@ -1,38 +0,0 @@
#!/bin/bash
. ${SCRIPTS:-/}start-utils
: ${PLUGINS_SYNC_UPDATE:=true}
isDebugging && set -x
if [ -d /plugins ]; then
case ${TYPE} in
SPIGOT|BUKKIT|PAPER|MAGMA)
mkdir -p /data/plugins
log "Copying plugins over..."
if isTrue ${PLUGINS_SYNC_UPDATE}; then
updateArg="--update"
fi
# Copy plugins over using rsync to allow deeply nested updates of plugins
rsync -a --out-format="update:%f:Last Modified %M" --prune-empty-dirs $updateArg /plugins /data
;;
esac
fi
# If any modules have been provided, copy them over
if [ -d /mods ]; then
log "Copying any mods over..."
mkdir -p /data/mods
rsync -a --out-format="update:%f:Last Modified %M" "${rsyncArgs[@]}" --prune-empty-dirs --update /mods /data
fi
: ${COPY_CONFIG_DEST:="/data/config"}
if [ -d /config ]; then
log "Copying any configs from /config to $COPY_CONFIG_DEST"
mkdir -p $COPY_CONFIG_DEST
rsync -a --out-format="update:%f:Last Modified %M" "${rsyncArgs[@]}" --prune-empty-dirs --update /config/ $COPY_CONFIG_DEST
fi
exec ${SCRIPTS:-/}start-finalSetupServerProperties $@

start-finalSetupPlugins (new executable file, 23 lines)
View File

@@ -0,0 +1,23 @@
#!/bin/bash
. ${SCRIPTS:-/}start-utils
: ${PLUGINS_SYNC_UPDATE:=true}
isDebugging && set -x
if [ -d /plugins ]; then
case ${TYPE} in
SPIGOT|BUKKIT|PAPER)
mkdir -p /data/plugins
log "Copying plugins over..."
if isTrue ${PLUGINS_SYNC_UPDATE}; then
updateArg="--update"
fi
# Copy plugins over using rsync to allow deeply nested updates of plugins
rsync -a --out-format="update:%f:Last Modified %M" --prune-empty-dirs $updateArg /plugins /data
;;
esac
fi
exec ${SCRIPTS:-/}start-finalSetupServerProperties $@

View File

@@ -13,7 +13,7 @@ function setServerProp {
var=${var,,} ;;
esac
log "Setting ${prop} to '${var}' in ${SERVER_PROPERTIES}"
sed -i "/^${prop}\s*=/ c ${prop}=${var//\\/\\\\}" "$SERVER_PROPERTIES"
sed -i "/^${prop}\s*=/ c ${prop}=${var}" "$SERVER_PROPERTIES"
else
log "Skip setting ${prop}"
fi
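The only functional difference between the two `sed` lines above is that the newer one doubles backslashes in the value before handing it to `sed`'s `c` command; a quick sketch with an illustrative value shows the effect:

```
var='C:\mc\world'
echo "${var}"             # C:\mc\world
echo "${var//\\/\\\\}"    # C:\\mc\\world, doubled so sed keeps the backslashes literal
```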

View File

@@ -25,12 +25,7 @@ if [[ "$WORLD" ]] && ( isTrue "${FORCE_WORLD_COPY}" || [ ! -d "$worldDest" ] );
mkdir -p /tmp/world-data
(cd /tmp/world-data && unzip -o -q "$zipSrc")
if [ "$TYPE" = "SPIGOT" ]; then
baseDirs=$(find /tmp/world-data -name "level.dat" -not -path "*_nether*" -not -path "*_the_end*" -exec dirname "{}" \;)
else
baseDirs=$(find /tmp/world-data -name "level.dat" -exec dirname "{}" \;)
fi
baseDirs=$(find /tmp/world-data -name "level.dat" -exec dirname "{}" \;)
count=$(echo "$baseDirs" | wc -l)
if [[ $count -gt 1 ]]; then
baseDir="$(echo "$baseDirs" | sed -n ${WORLD_INDEX:-1}p)"
@@ -43,11 +38,6 @@ if [[ "$WORLD" ]] && ( isTrue "${FORCE_WORLD_COPY}" || [ ! -d "$worldDest" ] );
exit 1
fi
rsync --remove-source-files --recursive --delete "$baseDir/" "$worldDest"
if [ "$TYPE" = "SPIGOT" ]; then
log "Copying end and nether ..."
[ -d "${baseDir}_nether" ] && rsync --remove-source-files --recursive --delete "${baseDir}_nether/" "${worldDest}_nether"
[ -d "${baseDir}_the_end" ] && rsync --remove-source-files --recursive --delete "${baseDir}_the_end/" "${worldDest}_the_end"
fi
else
log "Cloning world directory from $WORLD ..."
rsync --recursive --delete "${WORLD%/}"/ "$worldDest"

View File

@@ -49,6 +49,26 @@ for j in $JSON_FILES; do
fi
done
# If any modules have been provided, copy them over
if [ -d /mods ]; then
log "Copying any mods over..."
mkdir -p /data/mods
if isTrue "${REMOVE_OLD_MODS}"; then
rsyncArgs=(--delete)
fi
rsync -a --out-format="update:%f:Last Modified %M" "${rsyncArgs[@]}" --prune-empty-dirs --update /mods /data
fi
[ -d /data/config ] || mkdir /data/config
for c in /config/*
do
if [ -f "$c" ]; then
log Copying configuration $(basename "$c")
cp -rf "$c" /data/config
fi
done
EXTRA_ARGS=""
# Optional disable console
if versionLessThan 1.14 && [[ ${CONSOLE,,} = false ]]; then

View File

@@ -1,14 +1,8 @@
#!/bin/bash
function join_by() {
local d=$1
shift
echo -n "$1"
shift
printf "%s" "${@/#/$d}"
}
function join_by { local d=$1; shift; echo -n "$1"; shift; printf "%s" "${@/#/$d}"; }
function isURL() {
function isURL {
local value=$1
if [[ ${value:0:8} == "https://" || ${value:0:7} == "http://" ]]; then
@@ -18,114 +12,90 @@ function isURL() {
fi
}
function isValidFileURL() {
suffix=${1:?Missing required suffix arg}
url=${2:?Missing required url arg}
[[ "$url" == http*://*.${suffix} || "$url" == http*://*.${suffix}\?* ]]
}
function resolveEffectiveUrl() {
url="${1:?Missing required url argument}"
if ! curl -Ls -o /dev/null -w %{url_effective} "$url"; then
log "ERROR failed to resolve effective URL from $url"
exit 2
fi
}
function getFilenameFromUrl() {
url="${1:?Missing required url argument}"
strippedOfQuery="${url%\?*}"
basename "$strippedOfQuery"
}
function isTrue() {
function isTrue {
local value=${1,,}
result=
case ${value} in
true | on)
result=0
;;
*)
result=1
;;
true|on)
result=0
;;
*)
result=1
;;
esac
return ${result}
}
function isDebugging() {
if [[ -v DEBUG ]] && [[ ${DEBUG^^} == TRUE ]]; then
function isDebugging {
if [[ -v DEBUG ]] && [[ ${DEBUG^^} = TRUE ]]; then
return 0
else
return 1
fi
}
function debug() {
function debug {
if isDebugging; then
log "DEBUG: $*"
fi
}
function logn() {
function logn {
echo -n "[init] $*"
}
function log() {
function log {
echo "[init] $*"
}
function logAutopause() {
function logAutopause {
echo "[Autopause loop] $*"
}
function logAutopauseAction() {
function logAutopauseAction {
echo "[$(date -Iseconds)] [Autopause] $*"
}
function normalizeMemSize() {
function normalizeMemSize {
local scale=1
case ${1,,} in
*k)
scale=1024
;;
*m)
scale=1048576
;;
*g)
scale=1073741824
;;
*k)
scale=1024;;
*m)
scale=1048576;;
*g)
scale=1073741824;;
esac
val=${1:0:-1}
echo $((val * scale))
val=${1:0: -1}
echo $(( val * scale ))
}
function versionLessThan() {
function versionLessThan {
local activeParts
IFS=. read -ra activeParts <<<"${VANILLA_VERSION}"
IFS=. read -ra activeParts <<< "${VANILLA_VERSION}"
local givenParts
IFS=. read -ra givenParts <<<"$1"
IFS=. read -ra givenParts <<< "$1"
if ((${#activeParts[@]} < 2)); then
if (( ${#activeParts[@]} < 2 )); then
return 1
fi
if ((${#activeParts[@]} == 2)); then
if ((activeParts[0] < givenParts[0])) ||
((activeParts[0] == givenParts[0] && activeParts[1] < givenParts[1])); then
if (( ${#activeParts[@]} == 2 )); then
if (( activeParts[0] < givenParts[0] )) || \
(( activeParts[0] == givenParts[0] && activeParts[1] < givenParts[1] )); then
return 0
else
return 1
fi
else
if ((activeParts[0] < givenParts[0])) ||
((activeParts[0] == givenParts[0] && activeParts[1] < givenParts[1])) ||
((activeParts[0] == givenParts[0] && activeParts[1] == givenParts[1] && activeParts[2] < givenParts[2])); then
if (( activeParts[0] < givenParts[0] )) || \
(( activeParts[0] == givenParts[0] && activeParts[1] < givenParts[1] )) || \
(( activeParts[0] == givenParts[0] && activeParts[1] == givenParts[1] && activeParts[2] < givenParts[2] )); then
return 0
else
return 1
@@ -145,10 +115,10 @@ requireVar() {
}
function writeEula() {
if ! echo "# Generated via Docker on $(date)
if ! echo "# Generated via Docker on $(date)
eula=${EULA,,}
" >/data/eula.txt; then
log "ERROR: unable to write eula to /data. Please make sure attached directory is writable by uid=${UID}"
exit 2
fi
" > /data/eula.txt; then
log "ERROR: unable to write eula to /data. Please make sure attached directory is writable by uid=${UID}"
exit 2
fi
}
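To close, a small usage sketch of two of the helpers defined above, with expected results noted in comments; it assumes it runs inside the image, where `start-utils` sits at the path the other scripts source it from:

```
#!/bin/bash
. ${SCRIPTS:-/}start-utils

normalizeMemSize 1g      # prints 1073741824
normalizeMemSize 512m    # prints 536870912

VANILLA_VERSION=1.7.10
if versionLessThan 1.8.1; then
  echo "pre-1.8.1 server: watchdog option not available"   # this branch is taken for 1.7.10
fi
```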