Mirror of https://github.com/restic/restic.git, synced 2026-02-22 16:56:24 +00:00

Compare commits: v0.11.0...debug-stat (386 commits)
.github/workflows/tests.yml — 258 lines (vendored, new file)
@@ -0,0 +1,258 @@
name: test
on:
  # run tests on push to master, but not when other branches are pushed to
  push:
    branches:
      - master

  # run tests for all pull requests
  pull_request:

jobs:
  test:
    strategy:
      matrix:
        # list of jobs to run:
        include:
          - job_name: Windows
            go: 1.15.x
            os: windows-latest

          - job_name: macOS
            go: 1.15.x
            os: macOS-latest
            test_fuse: false

          - job_name: Linux
            go: 1.15.x
            os: ubuntu-latest
            test_cloud_backends: true
            test_fuse: true
            check_changelog: true

          - job_name: Linux
            go: 1.14.x
            os: ubuntu-latest
            test_fuse: true

          - job_name: Linux
            go: 1.13.x
            os: ubuntu-latest
            test_fuse: true

    name: ${{ matrix.job_name }} Go ${{ matrix.go }}
    runs-on: ${{ matrix.os }}

    env:
      GOPROXY: https://proxy.golang.org

    steps:
      - name: Set up Go ${{ matrix.go }}
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go }}

      - name: Get programs (Linux/macOS)
        run: |
          echo "build Go tools"
          go get github.com/restic/rest-server/...

          echo "install minio server"
          mkdir $HOME/bin
          if [ "$RUNNER_OS" == "macOS" ]; then
            wget --no-verbose -O $HOME/bin/minio https://dl.minio.io/server/minio/release/darwin-amd64/minio
          else
            wget --no-verbose -O $HOME/bin/minio https://dl.minio.io/server/minio/release/linux-amd64/minio
          fi
          chmod 755 $HOME/bin/minio

          echo "install rclone"
          if [ "$RUNNER_OS" == "macOS" ]; then
            wget --no-verbose -O rclone.zip https://downloads.rclone.org/rclone-current-osx-amd64.zip
          else
            wget --no-verbose -O rclone.zip https://downloads.rclone.org/rclone-current-linux-amd64.zip
          fi
          unzip rclone.zip
          cp rclone*/rclone $HOME/bin
          chmod 755 $HOME/bin/rclone
          rm -rf rclone*

          # add $HOME/bin to path ($GOBIN was already added to the path by setup-go@v2)
          echo $HOME/bin >> $GITHUB_PATH
        if: matrix.os == 'ubuntu-latest' || matrix.os == 'macOS-latest'

      - name: Get programs (Windows)
        shell: powershell
        run: |
          $ProgressPreference = 'SilentlyContinue'

          echo "build Go tools"
          go get github.com/restic/rest-server/...

          echo "install minio server"
          mkdir $Env:USERPROFILE/bin
          Invoke-WebRequest https://dl.minio.io/server/minio/release/windows-amd64/minio.exe -OutFile $Env:USERPROFILE/bin/minio.exe

          echo "install rclone"
          Invoke-WebRequest https://downloads.rclone.org/rclone-current-windows-amd64.zip -OutFile rclone.zip
          unzip rclone.zip
          copy rclone*/rclone.exe $Env:USERPROFILE/bin

          # add $USERPROFILE/bin to path ($GOBIN was already added to the path by setup-go@v2)
          echo $Env:USERPROFILE\bin >> $Env:GITHUB_PATH

          echo "install tar"
          cd $env:USERPROFILE
          mkdir tar
          cd tar

          # install exactly these versions of tar and the libraries, other combinations might not work!
          Invoke-WebRequest https://github.com/restic/test-assets/raw/master/tar-1.13-1-bin.zip -OutFile tar.zip
          unzip tar.zip
          Invoke-WebRequest https://github.com/restic/test-assets/raw/master/libintl-0.11.5-2-bin.zip -OutFile libintl.zip
          unzip libintl.zip
          Invoke-WebRequest https://github.com/restic/test-assets/raw/master/libiconv-1.8-1-bin.zip -OutFile libiconv.zip
          unzip libiconv.zip

          # add $USERPROFILE/tar/bin to path
          echo $Env:USERPROFILE\tar\bin >> $Env:GITHUB_PATH
        if: matrix.os == 'windows-latest'

      - name: Check out code
        uses: actions/checkout@v2

      - name: Build with build.go
        run: |
          go run build.go

      - name: Run local Tests
        env:
          RESTIC_TEST_FUSE: ${{ matrix.test_fuse }}
        run: |
          go test -cover ./...

      - name: Test cloud backends
        env:
          RESTIC_TEST_S3_KEY: ${{ secrets.RESTIC_TEST_S3_KEY }}
          RESTIC_TEST_S3_SECRET: ${{ secrets.RESTIC_TEST_S3_SECRET }}
          RESTIC_TEST_S3_REPOSITORY: ${{ secrets.RESTIC_TEST_S3_REPOSITORY }}
          RESTIC_TEST_AZURE_ACCOUNT_NAME: ${{ secrets.RESTIC_TEST_AZURE_ACCOUNT_NAME }}
          RESTIC_TEST_AZURE_ACCOUNT_KEY: ${{ secrets.RESTIC_TEST_AZURE_ACCOUNT_KEY }}
          RESTIC_TEST_AZURE_REPOSITORY: ${{ secrets.RESTIC_TEST_AZURE_REPOSITORY }}
          RESTIC_TEST_B2_ACCOUNT_ID: ${{ secrets.RESTIC_TEST_B2_ACCOUNT_ID }}
          RESTIC_TEST_B2_ACCOUNT_KEY: ${{ secrets.RESTIC_TEST_B2_ACCOUNT_KEY }}
          RESTIC_TEST_B2_REPOSITORY: ${{ secrets.RESTIC_TEST_B2_REPOSITORY }}
          RESTIC_TEST_GS_REPOSITORY: ${{ secrets.RESTIC_TEST_GS_REPOSITORY }}
          RESTIC_TEST_GS_PROJECT_ID: ${{ secrets.RESTIC_TEST_GS_PROJECT_ID }}
          GOOGLE_PROJECT_ID: ${{ secrets.RESTIC_TEST_GS_PROJECT_ID }}
          RESTIC_TEST_GS_APPLICATION_CREDENTIALS_B64: ${{ secrets.RESTIC_TEST_GS_APPLICATION_CREDENTIALS_B64 }}
          RESTIC_TEST_OS_AUTH_URL: ${{ secrets.RESTIC_TEST_OS_AUTH_URL }}
          RESTIC_TEST_OS_TENANT_NAME: ${{ secrets.RESTIC_TEST_OS_TENANT_NAME }}
          RESTIC_TEST_OS_USERNAME: ${{ secrets.RESTIC_TEST_OS_USERNAME }}
          RESTIC_TEST_OS_PASSWORD: ${{ secrets.RESTIC_TEST_OS_PASSWORD }}
          RESTIC_TEST_OS_REGION_NAME: ${{ secrets.RESTIC_TEST_OS_REGION_NAME }}
          RESTIC_TEST_SWIFT: ${{ secrets.RESTIC_TEST_SWIFT }}
          # fail if any of the following tests cannot be run
          RESTIC_TEST_DISALLOW_SKIP: "restic/backend/rest.TestBackendREST,\
            restic/backend/sftp.TestBackendSFTP,\
            restic/backend/s3.TestBackendMinio,\
            restic/backend/rclone.TestBackendRclone,\
            restic/backend/s3.TestBackendS3,\
            restic/backend/swift.TestBackendSwift,\
            restic/backend/b2.TestBackendB2,\
            restic/backend/gs.TestBackendGS,\
            restic/backend/azure.TestBackendAzure"
        run: |
          # prepare credentials for Google Cloud Storage tests in a temp file
          export GOOGLE_APPLICATION_CREDENTIALS=$(mktemp --tmpdir restic-gcs-auth-XXXXXXX)
          echo $RESTIC_TEST_GS_APPLICATION_CREDENTIALS_B64 | base64 -d > $GOOGLE_APPLICATION_CREDENTIALS
          go test -cover -parallel 4 ./internal/backend/...

        # only run cloud backend tests for pull requests from and pushes to our
        # own repo, otherwise the secrets are not available
        if: (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository) && matrix.test_cloud_backends

      - name: Check changelog files with calens
        run: |
          echo "install calens"
          go get github.com/restic/calens

          echo "check changelog files"
          calens
        if: matrix.check_changelog

  cross_compile:
    strategy:
      # ATTENTION: the list of architectures must be in sync with helpers/build-release-binaries/main.go!
      matrix:
        # run cross-compile in two batches parallel so the overall tests run faster
        targets:
          - "linux/386 linux/amd64 linux/arm linux/arm64 linux/ppc64le linux/mips linux/mipsle linux/mips64 linux/mips64le \
            openbsd/386 openbsd/amd64"

          - "freebsd/386 freebsd/amd64 freebsd/arm \
            aix/ppc64 \
            darwin/amd64 \
            netbsd/386 netbsd/amd64 \
            windows/386 windows/amd64 \
            solaris/amd64"

    env:
      go: 1.15.x
      GOPROXY: https://proxy.golang.org

    runs-on: ubuntu-latest

    name: Cross Compile for ${{ matrix.targets }}

    steps:
      - name: Set up Go ${{ env.go }}
        uses: actions/setup-go@v2
        with:
          go-version: ${{ env.go }}

      - name: Check out code
        uses: actions/checkout@v2

      - name: Install gox
        run: |
          go get github.com/mitchellh/gox

      - name: Cross-compile with gox for ${{ matrix.targets }}
        env:
          GOFLAGS: "-trimpath"
          GOX_ARCHS: "${{ matrix.targets }}"
        run: |
          mkdir build-output
          gox -parallel 2 -verbose -osarch "$GOX_ARCHS" -output "build-output/{{.Dir}}_{{.OS}}_{{.Arch}}" ./cmd/restic
          gox -parallel 2 -verbose -osarch "$GOX_ARCHS" -tags debug -output "build-output/{{.Dir}}_{{.OS}}_{{.Arch}}_debug" ./cmd/restic

  lint:
    name: lint
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v2

      - name: golangci-lint
        uses: golangci/golangci-lint-action@v2
        with:
          # Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
          version: v1.36
          # Optional: show only new issues if it's a pull request. The default value is `false`.
          only-new-issues: true
          args: --verbose --timeout 5m

        # only run golangci-lint for pull requests, otherwise ALL hints get
        # reported. We need to slowly address all issues until we can enable
        # linting the master branch :)
        if: github.event_name == 'pull_request'

      - name: Check go.mod/go.sum
        run: |
          echo "check if go.mod and go.sum are up to date"
          go mod tidy
          git diff --exit-code go.mod go.sum
.golangci.yml — 57 lines (new file)
@@ -0,0 +1,57 @@
# This is the configuration for golangci-lint for the restic project.
#
# A sample config with all settings is here:
# https://github.com/golangci/golangci-lint/blob/master/.golangci.example.yml

linters:
  # only enable the linters listed below
  disable-all: true
  enable:
    # make sure all errors returned by functions are handled
    - errcheck

    # find unused code
    - deadcode

    # show how code can be simplified
    - gosimple

    # make sure code is formatted
    - gofmt

    # examine code and report suspicious constructs, such as Printf calls whose
    # arguments do not align with the format string
    - govet

    # make sure names and comments are used according to the conventions
    - golint

    # detect when assignments to existing variables are not used
    - ineffassign

    # run static analysis and find errors
    - staticcheck

    # find unused variables, functions, structs, types, etc.
    - unused

    # find unused struct fields
    - structcheck

    # find unused global variables
    - varcheck

    # parse and typecheck code
    - typecheck

issues:
  # don't use the default exclude rules, this hides (among others) ignored
  # errors from Close() calls
  exclude-use-default: false

  # list of things to not warn about
  exclude:
    # golint: do not warn about missing comments for exported stuff
    - exported (function|method|var|type|const) `.*` should have comment or be unexported
    # golint: ignore constants in all caps
    - don't use ALL_CAPS in Go names; use CamelCase
@@ -1,2 +0,0 @@
go:
  enabled: true
.travis.yml — 58 lines (deleted)
@@ -1,58 +0,0 @@
language: go
sudo: false

matrix:
  include:
    - os: linux
      go: "1.13.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    - os: linux
      go: "1.14.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    # only run fuse and cloud backends tests on Travis for the latest Go on Linux
    - os: linux
      go: "1.15.x"
      sudo: true
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    - os: osx
      go: "1.15.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
          - $HOME/Library/Caches/go-build
          - $HOME/gopath/pkg/mod

branches:
  only:
    - master

notifications:
  irc:
    channels:
      - "chat.freenode.net#restic"
    on_success: change
    on_failure: change
    skip_join: true

install:
  - go version
  - export GOBIN="$GOPATH/bin"
  - export PATH="$PATH:$GOBIN"
  - go env

script:
  - go run run_integration_tests.go
CHANGELOG.md — 497 lines
@@ -1,3 +1,500 @@
Changelog for restic 0.12.0 (2021-02-14)
========================================

The following sections list the changes in restic 0.12.0 relevant to
restic users. The changes are ordered by importance.

Summary
-------

* Fix #1681: Make `mount` not create missing mount point directory
* Fix #1800: Ignore `no data available` filesystem error during backup
* Fix #2563: Report the correct owner of directories in FUSE mounts
* Fix #2688: Make `backup` and `tag` commands separate tags by comma
* Fix #2739: Make the `cat` command respect the `--no-lock` option
* Fix #3087: The `--use-fs-snapshot` option now works on windows/386
* Fix #3100: Do not require gs bucket permissions when running `init`
* Fix #3111: Correctly detect output redirection for `backup` command on Windows
* Fix #3151: Don't create invalid snapshots when `backup` is interrupted
* Fix #3166: Improve error handling in the `restore` command
* Fix #3232: Correct statistics for overlapping targets
* Fix #3014: Fix sporadic stream reset between rclone and restic
* Fix #3152: Do not hang until foregrounded when completed in background
* Fix #3249: Improve error handling in `gs` backend
* Chg #3095: Deleting files on Google Drive now moves them to the trash
* Enh #2186: Allow specifying percentage in `check --read-data-subset`
* Enh #2453: Report permanent/fatal backend errors earlier
* Enh #2528: Add Alibaba/Aliyun OSS support in the `s3` backend
* Enh #2706: Configurable progress reports for non-interactive terminals
* Enh #2944: Add `backup` options `--files-from-{verbatim,raw}`
* Enh #3083: Allow usage of deprecated S3 `ListObjects` API
* Enh #3147: Support additional environment variables for Swift authentication
* Enh #3191: Add release binaries for MIPS architectures
* Enh #909: Back up mountpoints as empty directories
* Enh #3250: Add several more error checks
* Enh #2718: Improve `prune` performance and make it more customizable
* Enh #2495: Add option to let `backup` trust mtime without checking ctime
* Enh #2941: Speed up the repacking step of the `prune` command
* Enh #3006: Speed up the `rebuild-index` command
* Enh #3048: Add more checks for index and pack files in the `check` command
* Enh #2433: Make the `dump` command support `zip` format
* Enh #3099: Reduce memory usage of `check` command
* Enh #3106: Parallelize scan of snapshot content in `copy` and `prune`
* Enh #3130: Parallelize reading of locks and snapshots
* Enh #3254: Enable HTTP/2 for backend connections

Details
-------

* Bugfix #1681: Make `mount` not create missing mount point directory

  When specifying a non-existent directory as mount point for the `mount` command, restic
  used to create the specified directory automatically.

  This has now changed such that restic instead gives an error when the specified directory
  for the mount point does not exist.

  https://github.com/restic/restic/issues/1681
  https://github.com/restic/restic/pull/3008

* Bugfix #1800: Ignore `no data available` filesystem error during backup

  Restic was unable to backup files on some filesystems, for example certain configurations
  of CIFS on Linux which return a `no data available` error when reading extended
  attributes. These errors are now ignored.

  https://github.com/restic/restic/issues/1800
  https://github.com/restic/restic/pull/3034

* Bugfix #2563: Report the correct owner of directories in FUSE mounts

  Restic 0.10.0 changed the FUSE mount to always report the current user as the owner of
  directories within the FUSE mount, which is incorrect.

  This is now changed back to reporting the correct owner of a directory.

  https://github.com/restic/restic/issues/2563
  https://github.com/restic/restic/pull/3141

* Bugfix #2688: Make `backup` and `tag` commands separate tags by comma

  Running `restic backup --tag foo,bar` previously created snapshots with one single tag
  containing a comma (`foo,bar`) instead of two tags (`foo`, `bar`).

  Similarly, the `tag` command's `--set`, `--add` and `--remove` options would treat
  `foo,bar` as one tag instead of two tags. This was inconsistent with other commands and
  often unexpected when one intended `foo,bar` to mean two tags.

  To be consistent in all commands, restic now interprets `foo,bar` to mean two separate
  tags (`foo` and `bar`) instead of one tag (`foo,bar`) everywhere, including in the
  `backup` and `tag` commands.

  NOTE: This change might result in unexpected behavior in cases where you use the `forget`
  command and filter on tags like `foo,bar`. Snapshots previously backed up with
  `--tag foo,bar` will still not match that filter, but snapshots saved from now on will
  match that filter.

  To replace `foo,bar` tags with `foo` and `bar` tags in old snapshots, you can first
  generate a list of the relevant snapshots using a command like:

      restic snapshots --json --quiet | jq '.[] | select(contains({tags: ["foo,bar"]})) | .id'

  And then use `restic tag --set foo --set bar snapshotID [...]` to set the new tags. Please
  adjust the commands to include real tag names and any additional tags, as well as the list
  of snapshots to process.

  https://github.com/restic/restic/issues/2688
  https://github.com/restic/restic/pull/2690
  https://github.com/restic/restic/pull/3197
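The new splitting rule is easy to picture with a small Go sketch (`splitTags` is a hypothetical helper for illustration, not restic's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitTags mimics the new behavior: a single --tag argument
// containing commas is interpreted as several separate tags.
func splitTags(arg string) []string {
	return strings.Split(arg, ",")
}

func main() {
	// old behavior: one tag "foo,bar"; new behavior: two tags
	fmt.Println(splitTags("foo,bar")) // [foo bar]
}
```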
||||
|
||||
* Bugfix #2739: Make the `cat` command respect the `--no-lock` option
|
||||
|
||||
The `cat` command would not respect the `--no-lock` flag. This is now fixed.
|
||||
|
||||
https://github.com/restic/restic/issues/2739
|
||||
|
||||
* Bugfix #3087: The `--use-fs-snapshot` option now works on windows/386
|
||||
|
||||
Restic failed to create VSS snapshots on windows/386 with the following error:
|
||||
|
||||
GetSnapshotProperties() failed: E_INVALIDARG (0x80070057)
|
||||
|
||||
This is now fixed.
|
||||
|
||||
https://github.com/restic/restic/issues/3087
|
||||
https://github.com/restic/restic/pull/3090
|
||||
|
||||
* Bugfix #3100: Do not require gs bucket permissions when running `init`
|
||||
|
||||
Restic used to require bucket level permissions for the `gs` backend in order to initialize a
|
||||
restic repository.
|
||||
|
||||
It now allows a `gs` service account to initialize a repository if the bucket does exist and the
|
||||
service account has permissions to write/read to that bucket.
|
||||
|
||||
https://github.com/restic/restic/issues/3100
|
||||
|
||||
* Bugfix #3111: Correctly detect output redirection for `backup` command on Windows
|
||||
|
||||
On Windows, since restic 0.10.0 the `backup` command did not properly detect when the output
|
||||
was redirected to a file. This caused restic to output terminal control characters. This has
|
||||
been fixed by correcting the terminal detection.
|
||||
|
||||
https://github.com/restic/restic/issues/3111
|
||||
https://github.com/restic/restic/pull/3150
|
||||
|
||||
* Bugfix #3151: Don't create invalid snapshots when `backup` is interrupted
|
||||
|
||||
When canceling a backup run at a certain moment it was possible that restic created a snapshot
|
||||
with an invalid "null" tree. This caused `check` and other operations to fail. The `backup`
|
||||
command now properly handles interruptions and never saves a snapshot when interrupted.
|
||||
|
||||
https://github.com/restic/restic/issues/3151
|
||||
https://github.com/restic/restic/pull/3164
|
||||
|
||||
* Bugfix #3166: Improve error handling in the `restore` command
|
||||
|
||||
The `restore` command used to not print errors while downloading file contents from the
|
||||
repository. It also incorrectly exited with a zero error code even when there were errors
|
||||
during the restore process. This has all been fixed and `restore` now returns with a non-zero
|
||||
exit code when there's an error.
|
||||
|
||||
https://github.com/restic/restic/issues/3166
|
||||
https://github.com/restic/restic/pull/3207
|
||||
|
||||
* Bugfix #3232: Correct statistics for overlapping targets
|
||||
|
||||
A user reported that restic's statistics and progress information during backup was not
|
||||
correctly calculated when the backup targets (files/dirs to save) overlap. For example,
|
||||
consider a directory `foo` which contains (among others) a file `foo/bar`. When `restic
|
||||
backup foo foo/bar` was run, restic counted the size of the file `foo/bar` twice, so the
|
||||
completeness percentage as well as the number of files was wrong. This is now corrected.
|
||||
|
||||
https://github.com/restic/restic/issues/3232
|
||||
https://github.com/restic/restic/pull/3243
|
||||
|
||||
* Bugfix #3014: Fix sporadic stream reset between rclone and restic
|
||||
|
||||
Sometimes when using restic with the `rclone` backend, an error message similar to the
|
||||
following would be printed:
|
||||
|
||||
Didn't finish writing GET request (wrote 0/xxx): http2: stream closed
|
||||
|
||||
It was found that this was caused by restic closing the connection to rclone to soon when
|
||||
downloading data. A workaround has been added which waits for the end of the download before
|
||||
closing the connection.
|
||||
|
||||
https://github.com/rclone/rclone/issues/2598
|
||||
https://github.com/restic/restic/pull/3014
|
||||
|
||||
* Bugfix #3152: Do not hang until foregrounded when completed in background
|
||||
|
||||
On Linux, when running in the background restic failed to stop the terminal output of the
|
||||
`backup` command after it had completed. This caused restic to hang until moved to the
|
||||
foreground. This has now been fixed.
|
||||
|
||||
https://github.com/restic/restic/pull/3152
|
||||
https://forum.restic.net/t/restic-alpine-container-cron-hangs-epoll-pwait/3334
|
||||
|
||||
* Bugfix #3249: Improve error handling in `gs` backend
|
||||
|
||||
The `gs` backend did not notice when the last step of completing a file upload failed. Under rare
|
||||
circumstances, this could cause missing files in the backup repository. This has now been
|
||||
fixed.
|
||||
|
||||
https://github.com/restic/restic/pull/3249

* Change #3095: Deleting files on Google Drive now moves them to the trash

When deleting files on Google Drive via the `rclone` backend, restic used to bypass the trash
folder, which required that one used the `-o rclone.args` option to enable usage of the trash
folder. This ensured that deleted files in Google Drive were not kept indefinitely in the trash
folder. However, since Google Drive's trash retention policy changed to deleting trashed files
after 30 days, this is no longer needed.

Restic now leaves it up to rclone and its configuration to use or not use the trash folder when
deleting files. The default is to use the trash folder, as of rclone 1.53.2. To re-enable the
restic 0.11 behavior, set the `RCLONE_DRIVE_USE_TRASH` environment variable or change the
rclone configuration. See the rclone documentation for more details.

https://github.com/restic/restic/issues/3095
https://github.com/restic/restic/pull/3102

* Enhancement #2186: Allow specifying percentage in `check --read-data-subset`

We've enhanced the `check` command's `--read-data-subset` option to also accept a
percentage (e.g. `2.5%` or `10%`). This will check the given percentage of pack files (which
are randomly selected on each run).

https://github.com/restic/restic/issues/2186
https://github.com/restic/restic/pull/3038
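As an illustrative command-line sketch (the repository path is a placeholder, not part of the original entry), checking a random 5% of pack files looks like:

```shell
# Read and verify a random 5% of all pack files; a different subset
# is picked on each run. /srv/restic-repo is a hypothetical path.
restic -r /srv/restic-repo check --read-data-subset=5%
```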

* Enhancement #2453: Report permanent/fatal backend errors earlier

When encountering errors in reading from or writing to storage backends, restic retries the
failing operation up to nine times (for a total of ten attempts). It used to retry all backend
operations, but now detects some permanent error conditions so that it can report fatal errors
earlier.

Permanent failures include local disks being full, SSH connections dropping and permission
errors.

https://github.com/restic/restic/issues/2453
https://github.com/restic/restic/issues/3180
https://github.com/restic/restic/pull/3170
https://github.com/restic/restic/pull/3181

* Enhancement #2528: Add Alibaba/Aliyun OSS support in the `s3` backend

A new extended option `s3.bucket-lookup` has been added to support Alibaba/Aliyun OSS in the
`s3` backend. The option can be set to one of the following values:

- `auto` - Existing behaviour
- `dns` - Use DNS style bucket access
- `path` - Use path style bucket access

To make the `s3` backend work with Alibaba/Aliyun OSS you must set `s3.bucket-lookup` to `dns`
and set the `s3.region` parameter. For example:

restic -o s3.bucket-lookup=dns -o s3.region=oss-eu-west-1 -r s3:https://oss-eu-west-1.aliyuncs.com/bucketname init

Note that `s3.region` must be set, otherwise the MinIO SDK tries to look it up and it seems that
Alibaba doesn't support that properly.

https://github.com/restic/restic/issues/2528
https://github.com/restic/restic/pull/2535

* Enhancement #2706: Configurable progress reports for non-interactive terminals

The `backup`, `check` and `prune` commands never printed any progress reports on
non-interactive terminals. This behavior is now configurable using the
`RESTIC_PROGRESS_FPS` environment variable. Use for example a value of `1` for an update
every second, or `0.01666` for an update every minute.

The `backup` command now also prints the current progress when restic receives a `SIGUSR1`
signal.

Setting the `RESTIC_PROGRESS_FPS` environment variable or sending a `SIGUSR1` signal
prints a status report even when `--quiet` was specified.

https://github.com/restic/restic/issues/2706
https://github.com/restic/restic/issues/3194
https://github.com/restic/restic/pull/3199
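Since the variable holds a frames-per-second value, an update every N seconds corresponds to 1/N. A small sketch computing the value for a one-minute interval (the backup path in the comments is hypothetical):

```shell
# RESTIC_PROGRESS_FPS is updates per second, so one update every
# 60 seconds is 1/60:
fps=$(awk 'BEGIN { printf "%.5f", 1 / 60 }')
echo "RESTIC_PROGRESS_FPS=$fps"
# Use it like:
#   RESTIC_PROGRESS_FPS=$fps restic backup /home/user
# and ask a running backup for an immediate report with:
#   kill -USR1 <pid-of-restic>
```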

* Enhancement #2944: Add `backup` options `--files-from-{verbatim,raw}`

The new `backup` options `--files-from-verbatim` and `--files-from-raw` read a list of
files to back up from a file. Unlike the existing `--files-from` option, these options do not
interpret the listed filenames as glob patterns; instead, whitespace in filenames is
preserved as-is and no pattern expansion is done. Please see the documentation for specifics.

These new options are highly recommended over `--files-from` when using a script to generate
the list of files to back up.

https://github.com/restic/restic/issues/2944
https://github.com/restic/restic/issues/3013
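A sketch of feeding a generated list to `--files-from-verbatim` (the directory layout is invented for illustration; the final `restic` call is shown as a comment because it needs a repository):

```shell
# Build a list with one path per line; with --files-from-verbatim the
# names are taken literally, so spaces and '*' need no glob quoting.
dir=$(mktemp -d)
mkdir -p "$dir/dir with spaces"
touch "$dir/dir with spaces/file*.txt"
find "$dir" -type f > /tmp/backup-list
cat /tmp/backup-list
# Then back up exactly those paths (hypothetical repository):
#   restic backup --files-from-verbatim /tmp/backup-list
```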

* Enhancement #3083: Allow usage of deprecated S3 `ListObjects` API

Some S3 API implementations, e.g. Ceph before version 14.2.5, have a broken `ListObjectsV2`
implementation which causes problems for restic when using their API endpoints. When a broken
server implementation is used, restic prints errors similar to the following:

List() returned error: Truncated response should have continuation token set

As a temporary workaround, restic now allows using the older `ListObjects` endpoint by
setting the `s3.list-objects-v1` extended option, for instance:

restic -o s3.list-objects-v1=true snapshots

Please note that this option may be removed in future versions of restic.

https://github.com/restic/restic/issues/3083
https://github.com/restic/restic/pull/3085

* Enhancement #3147: Support additional environment variables for Swift authentication

The `swift` backend now supports the following additional environment variables for passing
authentication details to restic: `OS_USER_ID`, `OS_USER_DOMAIN_ID`,
`OS_PROJECT_DOMAIN_ID` and `OS_TRUST_ID`.

Depending on the `openrc` configuration file, these might be required when the user and project
domains differ from one another.

https://github.com/restic/restic/issues/3147
https://github.com/restic/restic/pull/3158
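For reference, a hedged sketch of such an environment block; every value is a placeholder and must come from your own `openrc` file:

```shell
# Placeholder values only -- substitute the IDs from your openrc file.
export OS_USER_ID="<user-id>"
export OS_USER_DOMAIN_ID="<user-domain-id>"
export OS_PROJECT_DOMAIN_ID="<project-domain-id>"
export OS_TRUST_ID="<trust-id>"
```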

* Enhancement #3191: Add release binaries for MIPS architectures

We've added a few new architectures for Linux to the release binaries: `mips`, `mipsle`,
`mips64`, and `mips64le`. MIPS is mostly used for low-end embedded systems.

https://github.com/restic/restic/issues/3191
https://github.com/restic/restic/pull/3208

* Enhancement #909: Back up mountpoints as empty directories

When the `--one-file-system` option is specified to `restic backup`, it ignores all file
systems mounted below one of the target directories. This means that when a snapshot is
restored, users needed to manually recreate the mountpoint directories.

Restic now backs up mountpoints as empty directories and therefore implements the same
approach as `tar`.

https://github.com/restic/restic/issues/909
https://github.com/restic/restic/pull/3119

* Enhancement #3250: Add several more error checks

We've added a lot more error checks in places where errors were previously ignored (as hinted by
the static analysis program `errcheck` via `golangci-lint`).

https://github.com/restic/restic/pull/3250

* Enhancement #2718: Improve `prune` performance and make it more customizable

The `prune` command is now much faster. This is especially the case for remote repositories or
repositories with not much data to remove. Also the memory usage of the `prune` command is now
reduced.

Restic used to rebuild the index from scratch after pruning. This could lead to missing packs in
the index in some cases for eventually consistent backends such as AWS S3. This behavior is
now changed and the index rebuilding uses the information already known by `prune`.

By default, the `prune` command no longer removes all unused data. This behavior can be
fine-tuned by new options, like the acceptable amount of unused space or the maximum size of
data to reorganize. For more details, please see
https://restic.readthedocs.io/en/stable/060_forget.html .

Moreover, `prune` now accepts the `--dry-run` option and also running `forget --dry-run
--prune` will show what `prune` would do.

This enhancement also fixes several open issues, e.g.:

- https://github.com/restic/restic/issues/1140
- https://github.com/restic/restic/issues/1599
- https://github.com/restic/restic/issues/1985
- https://github.com/restic/restic/issues/2112
- https://github.com/restic/restic/issues/2227
- https://github.com/restic/restic/issues/2305

https://github.com/restic/restic/pull/2718
https://github.com/restic/restic/pull/2842
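A sketch of the new tuning knobs (the repository path and the specific limits are examples, not defaults):

```shell
# Tolerate up to 10% unused space and repack at most 1G of data per
# run; --dry-run only reports what prune would do.
restic -r /srv/restic-repo prune --max-unused 10% --max-repack-size 1G --dry-run
```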

* Enhancement #2495: Add option to let `backup` trust mtime without checking ctime

The `backup` command used to require that both `ctime` and `mtime` of a file matched with a
previously backed up version to determine that the file was unchanged. In other words, if
either `ctime` or `mtime` of the file had changed, it would be considered changed and restic
would read the file's content again to back up the relevant (changed) parts of it.

The new option `--ignore-ctime` makes restic look at `mtime` only, such that `ctime` changes
for a file do not cause restic to read the file's contents again.

The check for both `ctime` and `mtime` was introduced in restic 0.9.6 to make backups more
reliable in the face of programs that reset `mtime` (some Unix archivers do that), but it turned
out to often be expensive because it made restic read file contents even if only the metadata
(owner, permissions) of a file had changed. The new `--ignore-ctime` option lets the user
restore the 0.9.5 behavior when needed. The existing `--ignore-inode` option already turned
off this behavior, but also removed a different check.

Please note that changes in files' metadata are still recorded, regardless of the command line
options provided to the backup command.

https://github.com/restic/restic/issues/2495
https://github.com/restic/restic/issues/2558
https://github.com/restic/restic/issues/2819
https://github.com/restic/restic/pull/2823
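The metadata-only case is easy to reproduce: `chmod` bumps a file's ctime while leaving mtime untouched, which is exactly the change `--ignore-ctime` tells restic to disregard. A small sketch assuming GNU `stat` on Linux:

```shell
# chmod changes ctime but not mtime (GNU stat assumed):
f=$(mktemp)
m1=$(stat -c %Y "$f"); c1=$(stat -c %Z "$f")
sleep 1
chmod 600 "$f"
m2=$(stat -c %Y "$f"); c2=$(stat -c %Z "$f")
echo "mtime delta: $((m2 - m1)), ctime delta: $((c2 - c1))"
# With --ignore-ctime restic would treat this file as unchanged:
#   restic backup --ignore-ctime /some/dir   (hypothetical path)
rm -f "$f"
```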

* Enhancement #2941: Speed up the repacking step of the `prune` command

The repack step of the `prune` command, which moves still used file parts into new pack files
such that the old ones can be garbage collected later on, now processes multiple pack files in
parallel. This is especially beneficial for high latency backends or when using a fast network
connection.

https://github.com/restic/restic/pull/2941

* Enhancement #3006: Speed up the `rebuild-index` command

We've optimized the `rebuild-index` command. Now, existing index entries are used to
minimize the number of pack files that must be read. This speeds up the index rebuild a lot.

Additionally, the option `--read-all-packs` has been added, implementing the previous
behavior.

https://github.com/restic/restic/pull/3006
https://github.com/restic/restic/issues/2547

* Enhancement #3048: Add more checks for index and pack files in the `check` command

The `check` command run with the `--read-data` or `--read-data-subset` options used to
verify only the pack file content - it did not check if the blobs within the pack are correctly
contained in the index.

A check for the latter is now in place, which can print the following error:

Blob ID is not contained in index or position is incorrect

Another test has also been added, which compares pack file sizes computed from the index and the
pack header with the actual file size. This test is able to detect truncated pack files.

If the index is not correct, it can be rebuilt by using the `rebuild-index` command.

Having added these tests, `restic check` is now able to detect non-existing blobs which are
wrongly referenced in the index. This situation could have led to missing data.

https://github.com/restic/restic/pull/3048
https://github.com/restic/restic/pull/3082

* Enhancement #2433: Make the `dump` command support `zip` format

Previously, restic could dump the contents of a whole folder structure only in the `tar`
format. The `dump` command now has a new flag to change output format to `zip`. Just pass
`--archive zip` as an option to `restic dump`.

https://github.com/restic/restic/pull/2433
https://github.com/restic/restic/pull/3081
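For example (repository, snapshot selector and path are placeholders):

```shell
# Dump a directory from the latest snapshot as a zip archive:
restic -r /srv/restic-repo dump --archive zip latest /home/user/docs > docs.zip
```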

* Enhancement #3099: Reduce memory usage of `check` command

The `check` command now requires less memory if it is run without the `--check-unused` option.

https://github.com/restic/restic/pull/3099

* Enhancement #3106: Parallelize scan of snapshot content in `copy` and `prune`

The `copy` and `prune` commands used to traverse the directories of snapshots one by one to find
used data. This snapshot traversal is now parallelized which can speed up this step several
times.

In addition the `check` command now reports how many snapshots have already been processed.

https://github.com/restic/restic/pull/3106

* Enhancement #3130: Parallelize reading of locks and snapshots

Restic used to read snapshots sequentially. For repositories containing many snapshots this
slowed down commands which have to read all snapshots.

Now the reading of snapshots is parallelized. This speeds up for example `prune`, `backup` and
other commands that search for snapshots with certain properties or which have to find the
`latest` snapshot.

The speed up also applies to locks stored in the backup repository.

https://github.com/restic/restic/pull/3130
https://github.com/restic/restic/pull/3174

* Enhancement #3254: Enable HTTP/2 for backend connections

Go's HTTP library usually automatically chooses between HTTP/1.x and HTTP/2 depending on
what the server supports. But for compatibility this mechanism is disabled if DialContext is
used (which is the case for restic). This change allows restic's HTTP client to negotiate
HTTP/2 if supported by the server.

https://github.com/restic/restic/pull/3254


Changelog for restic 0.11.0 (2020-11-05)
========================================


@@ -141,6 +141,14 @@ Installing the script `fmt-check` from https://github.com/edsrzf/gofmt-git-hook
locally as a pre-commit hook checks formatting before committing automatically,
just copy this script to `.git/hooks/pre-commit`.

The project is using the program
[`golangci-lint`](https://github.com/golangci/golangci-lint) to run a list of
linters and checkers. It will be run on the code when you submit a PR. In order
to check your code beforehand, you can run `golangci-lint run` manually.
Eventually, we will enable `golangci-lint` for the whole code base. For now,
you can ignore warnings printed for lines you did not modify; those will be
ignored by the CI.

For each pull request, several different systems run the integration tests on
Linux, macOS and Windows. We won't merge any code that does not pass all tests
for all systems, so when a test fails, try to find out what's wrong and fix

@@ -1,6 +1,5 @@
[](https://restic.readthedocs.io/en/latest/?badge=latest)
[](https://travis-ci.com/restic/restic)
[](https://ci.appveyor.com/project/fd0/restic/branch/master)
[](https://github.com/restic/restic/actions?query=workflow%3Atest)
[](https://goreportcard.com/report/github.com/restic/restic)

# Introduction

appveyor.yml (deleted, 32 lines)
@@ -1,32 +0,0 @@
clone_folder: c:\restic

environment:
  GOPATH: c:\gopath

branches:
  only:
    - master

cache:
  - '%LocalAppData%\go-build'

init:
  - ps: >-
      $app = Get-WmiObject -Class Win32_Product -Filter "Vendor = 'http://golang.org'"

      if ($app) {
        $app.Uninstall()
      }

install:
  - rmdir c:\go /s /q
  - appveyor DownloadFile https://dl.google.com/go/go1.15.2.windows-amd64.msi
  - msiexec /i go1.15.2.windows-amd64.msi /q
  - go version
  - go env
  - appveyor DownloadFile https://sourceforge.netcologne.de/project/gnuwin32/tar/1.13-1/tar-1.13-1-bin.zip -FileName tar.zip
  - 7z x tar.zip bin/tar.exe
  - set PATH=bin/;%PATH%

build_script:
  - go run run_integration_tests.go

changelog/0.12.0_2021-02-14/issue-1681 (new file, 10 lines)
@@ -0,0 +1,10 @@
Bugfix: Make `mount` not create missing mount point directory

When specifying a non-existent directory as mount point for the `mount`
command, restic used to create the specified directory automatically.

This has now changed such that restic instead gives an error when the
specified directory for the mount point does not exist.

https://github.com/restic/restic/issues/1681
https://github.com/restic/restic/pull/3008

changelog/0.12.0_2021-02-14/issue-1800 (new file, 8 lines)
@@ -0,0 +1,8 @@
Bugfix: Ignore `no data available` filesystem error during backup

Restic was unable to back up files on some filesystems, for example certain
configurations of CIFS on Linux which return a `no data available` error
when reading extended attributes. These errors are now ignored.

https://github.com/restic/restic/issues/1800
https://github.com/restic/restic/pull/3034

changelog/0.12.0_2021-02-14/issue-2186 (new file, 8 lines)
@@ -0,0 +1,8 @@
Enhancement: Allow specifying percentage in `check --read-data-subset`

We've enhanced the `check` command's `--read-data-subset` option to also accept
a percentage (e.g. `2.5%` or `10%`). This will check the given percentage of
pack files (which are randomly selected on each run).

https://github.com/restic/restic/issues/2186
https://github.com/restic/restic/pull/3038

changelog/0.12.0_2021-02-14/issue-2453 (new file, 14 lines)
@@ -0,0 +1,14 @@
Enhancement: Report permanent/fatal backend errors earlier

When encountering errors in reading from or writing to storage backends,
restic retries the failing operation up to nine times (for a total of ten
attempts). It used to retry all backend operations, but now detects some
permanent error conditions so that it can report fatal errors earlier.

Permanent failures include local disks being full, SSH connections
dropping and permission errors.

https://github.com/restic/restic/issues/2453
https://github.com/restic/restic/pull/3170
https://github.com/restic/restic/issues/3180
https://github.com/restic/restic/pull/3181

changelog/0.12.0_2021-02-14/issue-2528 (new file, 21 lines)
@@ -0,0 +1,21 @@
Enhancement: Add Alibaba/Aliyun OSS support in the `s3` backend

A new extended option `s3.bucket-lookup` has been added to support
Alibaba/Aliyun OSS in the `s3` backend. The option can be set to one
of the following values:

- `auto` - Existing behaviour
- `dns` - Use DNS style bucket access
- `path` - Use path style bucket access

To make the `s3` backend work with Alibaba/Aliyun OSS you must set
`s3.bucket-lookup` to `dns` and set the `s3.region` parameter. For
example:

restic -o s3.bucket-lookup=dns -o s3.region=oss-eu-west-1 -r s3:https://oss-eu-west-1.aliyuncs.com/bucketname init

Note that `s3.region` must be set, otherwise the MinIO SDK tries to
look it up and it seems that Alibaba doesn't support that properly.

https://github.com/restic/restic/issues/2528
https://github.com/restic/restic/pull/2535

changelog/0.12.0_2021-02-14/issue-2563 (new file, 9 lines)
@@ -0,0 +1,9 @@
Bugfix: Report the correct owner of directories in FUSE mounts

Restic 0.10.0 changed the FUSE mount to always report the current user
as the owner of directories within the FUSE mount, which is incorrect.

This is now changed back to reporting the correct owner of a directory.

https://github.com/restic/restic/issues/2563
https://github.com/restic/restic/pull/3141

changelog/0.12.0_2021-02-14/issue-2688 (new file, 31 lines)
@@ -0,0 +1,31 @@
Bugfix: Make `backup` and `tag` commands separate tags by comma

Running `restic backup --tag foo,bar` previously created snapshots with one
single tag containing a comma (`foo,bar`) instead of two tags (`foo`, `bar`).

Similarly, the `tag` command's `--set`, `--add` and `--remove` options would
treat `foo,bar` as one tag instead of two tags. This was inconsistent with
other commands and often unexpected when one intended `foo,bar` to mean two
tags.

To be consistent in all commands, restic now interprets `foo,bar` to mean two
separate tags (`foo` and `bar`) instead of one tag (`foo,bar`) everywhere,
including in the `backup` and `tag` commands.

NOTE: This change might result in unexpected behavior in cases where you use
the `forget` command and filter on tags like `foo,bar`. Snapshots previously
backed up with `--tag foo,bar` will still not match that filter, but snapshots
saved from now on will match that filter.

To replace `foo,bar` tags with `foo` and `bar` tags in old snapshots, you can
first generate a list of the relevant snapshots using a command like:

restic snapshots --json --quiet | jq '.[] | select(contains({tags: ["foo,bar"]})) | .id'

and then use `restic tag --set foo --set bar snapshotID [...]` to set the new
tags. Please adjust the commands to include real tag names and any additional
tags, as well as the list of snapshots to process.

https://github.com/restic/restic/issues/2688
https://github.com/restic/restic/pull/2690
https://github.com/restic/restic/pull/3197
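A minimal sketch of the new tag behavior (the backup path is a placeholder):

```shell
# Both invocations now attach two separate tags, "foo" and "bar":
restic backup --tag foo,bar /home/user
restic backup --tag foo --tag bar /home/user
```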

changelog/0.12.0_2021-02-14/issue-2706 (new file, 17 lines)
@@ -0,0 +1,17 @@
Enhancement: Configurable progress reports for non-interactive terminals

The `backup`, `check` and `prune` commands never printed any progress
reports on non-interactive terminals. This behavior is now configurable
using the `RESTIC_PROGRESS_FPS` environment variable. Use for example a
value of `1` for an update every second, or `0.01666` for an update every
minute.

The `backup` command now also prints the current progress when restic
receives a `SIGUSR1` signal.

Setting the `RESTIC_PROGRESS_FPS` environment variable or sending a `SIGUSR1`
signal prints a status report even when `--quiet` was specified.

https://github.com/restic/restic/issues/2706
https://github.com/restic/restic/issues/3194
https://github.com/restic/restic/pull/3199

changelog/0.12.0_2021-02-14/issue-2739 (new file, 5 lines)
@@ -0,0 +1,5 @@
Bugfix: Make the `cat` command respect the `--no-lock` option

The `cat` command would not respect the `--no-lock` flag. This is now fixed.

https://github.com/restic/restic/issues/2739

changelog/0.12.0_2021-02-14/issue-2944 (new file, 13 lines)
@@ -0,0 +1,13 @@
Enhancement: Add `backup` options `--files-from-{verbatim,raw}`

The new `backup` options `--files-from-verbatim` and `--files-from-raw` read a
list of files to back up from a file. Unlike the existing `--files-from`
option, these options do not interpret the listed filenames as glob patterns;
instead, whitespace in filenames is preserved as-is and no pattern expansion is
done. Please see the documentation for specifics.

These new options are highly recommended over `--files-from` when using a
script to generate the list of files to back up.

https://github.com/restic/restic/issues/2944
https://github.com/restic/restic/issues/3013

changelog/0.12.0_2021-02-14/issue-3083 (new file, 18 lines)
@@ -0,0 +1,18 @@
Enhancement: Allow usage of deprecated S3 `ListObjects` API

Some S3 API implementations, e.g. Ceph before version 14.2.5, have a broken
`ListObjectsV2` implementation which causes problems for restic when using
their API endpoints. When a broken server implementation is used, restic prints
errors similar to the following:

List() returned error: Truncated response should have continuation token set

As a temporary workaround, restic now allows using the older `ListObjects`
endpoint by setting the `s3.list-objects-v1` extended option, for instance:

restic -o s3.list-objects-v1=true snapshots

Please note that this option may be removed in future versions of restic.

https://github.com/restic/restic/issues/3083
https://github.com/restic/restic/pull/3085

changelog/0.12.0_2021-02-14/issue-3090 (new file, 10 lines)
@@ -0,0 +1,10 @@
Bugfix: The `--use-fs-snapshot` option now works on windows/386

Restic failed to create VSS snapshots on windows/386 with the following error:

GetSnapshotProperties() failed: E_INVALIDARG (0x80070057)

This is now fixed.

https://github.com/restic/restic/issues/3087
https://github.com/restic/restic/pull/3090

changelog/0.12.0_2021-02-14/issue-3095 (new file, 17 lines)
@@ -0,0 +1,17 @@
Change: Deleting files on Google Drive now moves them to the trash

When deleting files on Google Drive via the `rclone` backend, restic used to
bypass the trash folder, which required that one used the `-o rclone.args`
option to enable usage of the trash folder. This ensured that deleted files in
Google Drive were not kept indefinitely in the trash folder. However, since
Google Drive's trash retention policy changed to deleting trashed files after
30 days, this is no longer needed.

Restic now leaves it up to rclone and its configuration to use or not use the
trash folder when deleting files. The default is to use the trash folder, as
of rclone 1.53.2. To re-enable the restic 0.11 behavior, set the
`RCLONE_DRIVE_USE_TRASH` environment variable or change the rclone
configuration. See the rclone documentation for more details.

https://github.com/restic/restic/issues/3095
https://github.com/restic/restic/pull/3102

changelog/0.12.0_2021-02-14/issue-3100 (new file, 10 lines)
@@ -0,0 +1,10 @@
Bugfix: Do not require gs bucket permissions when running `init`

Restic used to require bucket level permissions for the `gs` backend
in order to initialize a restic repository.

It now allows a `gs` service account to initialize a repository if the
bucket does exist and the service account has permissions to write/read
to that bucket.

https://github.com/restic/restic/issues/3100

changelog/0.12.0_2021-02-14/issue-3111 (new file, 9 lines)
@@ -0,0 +1,9 @@
Bugfix: Correctly detect output redirection for `backup` command on Windows

On Windows, since restic 0.10.0 the `backup` command did not properly detect
when the output was redirected to a file. This caused restic to output
terminal control characters. This has been fixed by correcting the terminal
detection.

https://github.com/restic/restic/issues/3111
https://github.com/restic/restic/pull/3150

changelog/0.12.0_2021-02-14/issue-3147 (new file, 11 lines)
@@ -0,0 +1,11 @@
Enhancement: Support additional environment variables for Swift authentication

The `swift` backend now supports the following additional environment variables
for passing authentication details to restic:
`OS_USER_ID`, `OS_USER_DOMAIN_ID`, `OS_PROJECT_DOMAIN_ID` and `OS_TRUST_ID`

Depending on the `openrc` configuration file these might be required when the
user and project domains differ from one another.

https://github.com/restic/restic/issues/3147
https://github.com/restic/restic/pull/3158

changelog/0.12.0_2021-02-14/issue-3151 (new file, 9 lines)
@@ -0,0 +1,9 @@
Bugfix: Don't create invalid snapshots when `backup` is interrupted

When canceling a backup run at a certain moment it was possible that
restic created a snapshot with an invalid "null" tree. This caused
`check` and other operations to fail. The `backup` command now properly
handles interruptions and never saves a snapshot when interrupted.

https://github.com/restic/restic/issues/3151
https://github.com/restic/restic/pull/3164

changelog/0.12.0_2021-02-14/issue-3166 (new file, 9 lines)
@@ -0,0 +1,9 @@
Bugfix: Improve error handling in the `restore` command

The `restore` command used to not print errors while downloading file contents
from the repository. It also incorrectly exited with a zero error code even
when there were errors during the restore process. This has all been fixed and
`restore` now returns with a non-zero exit code when there's an error.

https://github.com/restic/restic/issues/3166
https://github.com/restic/restic/pull/3207

changelog/0.12.0_2021-02-14/issue-3191 (new file, 8 lines)
@@ -0,0 +1,8 @@
Enhancement: Add release binaries for MIPS architectures

We've added a few new architectures for Linux to the release binaries: `mips`,
`mipsle`, `mips64`, and `mips64le`. MIPS is mostly used for low-end embedded
systems.

https://github.com/restic/restic/issues/3191
https://github.com/restic/restic/pull/3208

changelog/0.12.0_2021-02-14/issue-3232 (new file, 11 lines)
@@ -0,0 +1,11 @@
Bugfix: Correct statistics for overlapping targets

A user reported that restic's statistics and progress information during backup
was not correctly calculated when the backup targets (files/dirs to save)
overlap. For example, consider a directory `foo` which contains (among others)
a file `foo/bar`. When `restic backup foo foo/bar` was run, restic counted the
size of the file `foo/bar` twice, so the completeness percentage as well as the
number of files was wrong. This is now corrected.

https://github.com/restic/restic/issues/3232
https://github.com/restic/restic/pull/3243
|
||||
12
changelog/0.12.0_2021-02-14/issue-909
Normal file
@@ -0,0 +1,12 @@
Enhancement: Back up mountpoints as empty directories

When the `--one-file-system` option is specified to `restic backup`, it
ignores all file systems mounted below one of the target directories. This
meant that when a snapshot was restored, users had to manually recreate
the mountpoint directories.

Restic now backs up mountpoints as empty directories and therefore implements
the same approach as `tar`.

https://github.com/restic/restic/issues/909
https://github.com/restic/restic/pull/3119
6
changelog/0.12.0_2021-02-14/pr-3250
Normal file
@@ -0,0 +1,6 @@
Enhancement: Add several more error checks

We've added a lot more error checks in places where errors were previously
ignored (as hinted by the static analysis program `errcheck` via `golangci-lint`).

https://github.com/restic/restic/pull/3250
29
changelog/0.12.0_2021-02-14/pull-2718
Normal file
@@ -0,0 +1,29 @@
Enhancement: Improve `prune` performance and make it more customizable

The `prune` command is now much faster. This is especially the case for remote
repositories or repositories with not much data to remove. The memory usage of
the `prune` command is also reduced.

Restic used to rebuild the index from scratch after pruning. This could lead
to missing packs in the index in some cases for eventually consistent backends
such as AWS S3. This behavior is now changed and the index rebuilding uses the
information already known by `prune`.

By default, the `prune` command no longer removes all unused data. This
behavior can be fine-tuned by new options, like the acceptable amount of
unused space or the maximum size of data to reorganize. For more details,
please see https://restic.readthedocs.io/en/stable/060_forget.html .

Moreover, `prune` now accepts the `--dry-run` option, and running
`forget --dry-run --prune` will also show what `prune` would do.

This enhancement also fixes several open issues, e.g.:
- https://github.com/restic/restic/issues/1140
- https://github.com/restic/restic/issues/1599
- https://github.com/restic/restic/issues/1985
- https://github.com/restic/restic/issues/2112
- https://github.com/restic/restic/issues/2227
- https://github.com/restic/restic/issues/2305

https://github.com/restic/restic/pull/2718
https://github.com/restic/restic/pull/2842
27
changelog/0.12.0_2021-02-14/pull-2823
Normal file
@@ -0,0 +1,27 @@
Enhancement: Add option to let `backup` trust mtime without checking ctime

The `backup` command used to require that both `ctime` and `mtime` of a file
matched a previously backed up version to determine that the file was
unchanged. In other words, if either `ctime` or `mtime` of the file had
changed, it would be considered changed and restic would read the file's
content again to back up the relevant (changed) parts of it.

The new option `--ignore-ctime` makes restic look at `mtime` only, so that
`ctime` changes for a file do not cause restic to read the file's contents
again.

The check for both `ctime` and `mtime` was introduced in restic 0.9.6 to make
backups more reliable in the face of programs that reset `mtime` (some Unix
archivers do that), but it turned out to often be expensive because it made
restic read file contents even if only the metadata (owner, permissions) of
a file had changed. The new `--ignore-ctime` option lets the user restore the
0.9.5 behavior when needed. The existing `--ignore-inode` option already
turned off this behavior, but also removed a different check.

Please note that changes in files' metadata are still recorded, regardless of
the command line options provided to the backup command.

https://github.com/restic/restic/issues/2495
https://github.com/restic/restic/issues/2558
https://github.com/restic/restic/issues/2819
https://github.com/restic/restic/pull/2823
8
changelog/0.12.0_2021-02-14/pull-2941
Normal file
@@ -0,0 +1,8 @@
Enhancement: Speed up the repacking step of the `prune` command

The repack step of the `prune` command, which moves still-used file parts into
new pack files such that the old ones can be garbage collected later on, now
processes multiple pack files in parallel. This is especially beneficial for
high-latency backends or when using a fast network connection.

https://github.com/restic/restic/pull/2941
11
changelog/0.12.0_2021-02-14/pull-3006
Normal file
@@ -0,0 +1,11 @@
Enhancement: Speed up the `rebuild-index` command

We've optimized the `rebuild-index` command. Now, existing index entries are used
to minimize the number of pack files that must be read. This speeds up the index
rebuild a lot.

Additionally, the option `--read-all-packs` has been added, implementing the
previous behavior.

https://github.com/restic/restic/issues/2547
https://github.com/restic/restic/pull/3006
13
changelog/0.12.0_2021-02-14/pull-3014
Normal file
@@ -0,0 +1,13 @@
Bugfix: Fix sporadic stream reset between rclone and restic

Sometimes when using restic with the `rclone` backend, an error message
similar to the following would be printed:

    Didn't finish writing GET request (wrote 0/xxx): http2: stream closed

It was found that this was caused by restic closing the connection to rclone
too soon when downloading data. A workaround has been added which waits for
the end of the download before closing the connection.

https://github.com/restic/restic/pull/3014
https://github.com/rclone/rclone/issues/2598
23
changelog/0.12.0_2021-02-14/pull-3048
Normal file
@@ -0,0 +1,23 @@
Enhancement: Add more checks for index and pack files in the `check` command

The `check` command run with the `--read-data` or `--read-data-subset` options
used to only verify the pack file content - it did not check if the blobs
within the pack are correctly contained in the index.

A check for the latter is now in place, which can print the following error:

    Blob ID is not contained in index or position is incorrect

Another test is also added, which compares pack file sizes computed from the
index and the pack header with the actual file size. This test is able to
detect truncated pack files.

If the index is not correct, it can be rebuilt by using the `rebuild-index`
command.

Having added these tests, `restic check` is now able to detect non-existing
blobs which are wrongly referenced in the index. This situation could have
led to missing data.

https://github.com/restic/restic/pull/3048
https://github.com/restic/restic/pull/3082
8
changelog/0.12.0_2021-02-14/pull-3081
Normal file
@@ -0,0 +1,8 @@
Enhancement: Make the `dump` command support `zip` format

Previously, restic could dump the contents of a whole folder structure only
in the `tar` format. The `dump` command now has a new flag to change output
format to `zip`. Just pass `--archive zip` as an option to `restic dump`.

https://github.com/restic/restic/pull/2433
https://github.com/restic/restic/pull/3081
6
changelog/0.12.0_2021-02-14/pull-3099
Normal file
@@ -0,0 +1,6 @@
Enhancement: Reduce memory usage of `check` command

The `check` command now requires less memory if it is run without the
`--check-unused` option.

https://github.com/restic/restic/pull/3099
10
changelog/0.12.0_2021-02-14/pull-3106
Normal file
@@ -0,0 +1,10 @@
Enhancement: Parallelize scan of snapshot content in `copy` and `prune`

The `copy` and `prune` commands used to traverse the directories of
snapshots one by one to find used data. This snapshot traversal is
now parallelized, which can speed up this step several times.

In addition, the `check` command now reports how many snapshots have
already been processed.

https://github.com/restic/restic/pull/3106
13
changelog/0.12.0_2021-02-14/pull-3130
Normal file
@@ -0,0 +1,13 @@
Enhancement: Parallelize reading of locks and snapshots

Restic used to read snapshots sequentially. For repositories containing
many snapshots this slowed down commands which have to read all snapshots.

Now the reading of snapshots is parallelized. This speeds up, for example,
`prune`, `backup` and other commands that search for snapshots with certain
properties or which have to find the `latest` snapshot.

The speed-up also applies to locks stored in the backup repository.

https://github.com/restic/restic/pull/3130
https://github.com/restic/restic/pull/3174
8
changelog/0.12.0_2021-02-14/pull-3152
Normal file
@@ -0,0 +1,8 @@
Bugfix: Do not hang until foregrounded when completed in background

On Linux, when running in the background, restic failed to stop the terminal
output of the `backup` command after it had completed. This caused restic to
hang until moved to the foreground. This has now been fixed.

https://github.com/restic/restic/pull/3152
https://forum.restic.net/t/restic-alpine-container-cron-hangs-epoll-pwait/3334
7
changelog/0.12.0_2021-02-14/pull-3249
Normal file
@@ -0,0 +1,7 @@
Bugfix: Improve error handling in `gs` backend

The `gs` backend did not notice when the last step of completing a
file upload failed. Under rare circumstances, this could cause
missing files in the backup repository. This has now been fixed.

https://github.com/restic/restic/pull/3249
8
changelog/0.12.0_2021-02-14/pull-3254
Normal file
@@ -0,0 +1,8 @@
Enhancement: Enable HTTP/2 for backend connections

Go's HTTP library usually automatically chooses between HTTP/1.x and HTTP/2
depending on what the server supports. But for compatibility reasons, this
mechanism is disabled if a custom DialContext is used (which is the case for
restic). This change allows restic's HTTP client to negotiate HTTP/2 if
supported by the server.

https://github.com/restic/restic/pull/3254
@@ -1,11 +0,0 @@
Bugfix: Restore timestamps and permissions on intermediate directories

When using the `--include` option of the restore command, restic restored
timestamps and permissions only on directories selected by the include pattern.
Intermediate directories, which are necessary to restore files located in
sub-directories, were created with default permissions. We've fixed the restore
command to restore timestamps and permissions for these directories as well.

https://github.com/restic/restic/issues/1212
https://github.com/restic/restic/issues/1402
https://github.com/restic/restic/pull/2906
@@ -1,15 +0,0 @@
Bugfix: Mark repository files as read-only when using the local backend

Files stored in a local repository were marked as writeable on the
filesystem for non-Windows systems, which did not prevent accidental file
modifications outside of restic. In addition, the local backend did not work
with certain filesystems and network mounts which do not permit modifications
of file permissions.

restic now marks files stored in a local repository as read-only on the
filesystem on non-Windows systems. The error handling is improved to support
more filesystems.

https://github.com/restic/restic/issues/1756
https://github.com/restic/restic/issues/2157
https://github.com/restic/restic/pull/2989
@@ -1,10 +0,0 @@
Bugfix: Hide password in REST backend repository URLs

When using a password in the REST backend repository URL, the password could
in some cases be included in the output from restic, e.g. when initializing
a repo or during an error.

The password is now replaced with "***" where applicable.

https://github.com/restic/restic/issues/2241
https://github.com/restic/restic/pull/2658
@@ -1,12 +0,0 @@
Bugfix: Correctly dump directories into tar files

The dump command previously wrote directories in a tar file in a way which
can cause compatibility problems. This caused, for example, 7zip on Windows
to not open tar files containing directories. In addition it was not possible
to dump directories with extended attributes. These compatibility problems
are now corrected.

In addition, a tar file now includes the name of the owner and group of a file.

https://github.com/restic/restic/issues/2319
https://github.com/restic/restic/pull/3039
@@ -1,9 +0,0 @@
Bugfix: Don't require `self-update --output` placeholder file

`restic self-update --output /path/to/new-restic` used to require that
new-restic was an existing file, to be overwritten. Now it's possible
to download an updated restic binary to a new path, without first
having to create a placeholder file.

https://github.com/restic/restic/issues/2491
https://github.com/restic/restic/pull/2937
@@ -1,7 +0,0 @@
Bugfix: Fix rare cases of backup command hanging forever

We've fixed an issue with the backup progress reporting which could cause
restic to hang forever right before finishing a backup.

https://github.com/restic/restic/issues/2834
https://github.com/restic/restic/pull/2963
@@ -1,6 +0,0 @@
Bugfix: Fix manpage formatting

The manpage formatting in restic v0.10.0 was garbled, which is fixed now.

https://github.com/restic/restic/issues/2938
https://github.com/restic/restic/pull/2977
@@ -1,7 +0,0 @@
Bugfix: Make --exclude-larger-than handle disappearing files

There was a small bug in the backup command's --exclude-larger-than
option where files that disappeared between scanning and actually
backing them up to the repository caused a panic. This is now fixed.

https://github.com/restic/restic/issues/2942
@@ -1,9 +0,0 @@
Bugfix: restic generate, help and self-update no longer check passwords

The commands `restic cache`, `generate`, `help` and `self-update` don't need
passwords, but they previously did run the RESTIC_PASSWORD_COMMAND (if set in
the environment), prompting users to authenticate for no reason. They now skip
running the password command.

https://github.com/restic/restic/issues/2951
https://github.com/restic/restic/pull/2987
@@ -1,9 +0,0 @@
Enhancement: Optimize check for unchanged files during backup

During a backup, restic skips processing files which have not changed since
the last backup run. Previously, this required opening each file once, which
can be slow on network filesystems. The backup command now checks for file
changes before opening a file. This considerably reduces the time to create
a backup on network filesystems.

https://github.com/restic/restic/issues/2969
https://github.com/restic/restic/pull/2970
@@ -1,9 +0,0 @@
Bugfix: Make snapshots --json output [] instead of null when no snapshots

Restic previously output `null` instead of `[]` for the `--json snapshots`
command, when there were no snapshots in the repository. This caused some
minor problems when parsing the output, but is now fixed such that `[]` is
output when the list of snapshots is empty.

https://github.com/restic/restic/issues/2979
https://github.com/restic/restic/pull/2984
@@ -1,12 +0,0 @@
Enhancement: Add support for Volume Shadow Copy Service (VSS) on Windows

Volume Shadow Copy Service allows read access to files that are locked by
another process using an exclusive lock, through a filesystem snapshot. Restic
was unable to back up those files before. This update enables backing up these
files.

This needs to be enabled explicitly using the --use-fs-snapshot option of the
backup command.

https://github.com/restic/restic/issues/340
https://github.com/restic/restic/pull/2274
@@ -1,7 +0,0 @@
Enhancement: Authenticate to Google Cloud Storage with access token

When using the GCS backend, it is now possible to authenticate with OAuth2
access tokens instead of a credentials file by setting the GOOGLE_ACCESS_TOKEN
environment variable.

https://github.com/restic/restic/pull/2849
@@ -1,10 +0,0 @@
Enhancement: New option --repository-file

We've added a new command-line option --repository-file as an alternative
to -r. This allows reading the repository URL from a file in order to
prevent certain types of information leaks, especially for URLs containing
credentials.

https://github.com/restic/restic/issues/1458
https://github.com/restic/restic/issues/2900
https://github.com/restic/restic/pull/2910
@@ -1,8 +0,0 @@
Enhancement: Warn if parent snapshot cannot be loaded during backup

During a backup, restic uses the parent snapshot to check whether a file was
changed and has to be backed up again. For this check, the backup has to read
the directories contained in the old snapshot. If a tree blob cannot be
loaded, restic now warns about this problem with the backup repository.

https://github.com/restic/restic/pull/2978
@@ -11,7 +11,6 @@ import (
	"path"
	"path/filepath"
	"runtime"
	"strconv"
	"strings"
	"time"
@@ -56,14 +55,6 @@ Exit status is 3 if some source data could not be read (incomplete snapshot crea
	},
	DisableAutoGenTag: true,
	RunE: func(cmd *cobra.Command, args []string) error {
		if backupOptions.Stdin {
			for _, filename := range backupOptions.FilesFrom {
				if filename == "-" {
					return errors.Fatal("cannot use both `--stdin` and `--files-from -`")
				}
			}
		}

		var t tomb.Tomb
		term := termstatus.New(globalOptions.stdout, globalOptions.stderr, globalOptions.Quiet)
		t.Go(func() error { term.Run(t.Context(globalOptions.ctx)); return nil })
@@ -91,12 +82,15 @@ type BackupOptions struct {
	ExcludeLargerThan string
	Stdin             bool
	StdinFilename     string
	Tags              []string
	Tags              restic.TagLists
	Host              string
	FilesFrom         []string
	FilesFromVerbatim []string
	FilesFromRaw      []string
	TimeStamp         string
	WithAtime         bool
	IgnoreInode       bool
	IgnoreCtime       bool
	UseFsSnapshot     bool
}
@@ -121,16 +115,23 @@ func init() {
	f.StringVar(&backupOptions.ExcludeLargerThan, "exclude-larger-than", "", "max `size` of the files to be backed up (allowed suffixes: k/K, m/M, g/G, t/T)")
	f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
	f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "`filename` to use when reading from stdin")
	f.StringArrayVar(&backupOptions.Tags, "tag", nil, "add a `tag` for the new snapshot (can be specified multiple times)")
	f.Var(&backupOptions.Tags, "tag", "add `tags` for the new snapshot in the format `tag[,tag,...]` (can be specified multiple times)")

	f.StringVarP(&backupOptions.Host, "host", "H", "", "set the `hostname` for the snapshot manually. To prevent an expensive rescan use the \"parent\" flag")
	f.StringVar(&backupOptions.Host, "hostname", "", "set the `hostname` for the snapshot manually")
	f.MarkDeprecated("hostname", "use --host")
	err := f.MarkDeprecated("hostname", "use --host")
	if err != nil {
		// MarkDeprecated only returns an error when the flag could not be found
		panic(err)
	}

	f.StringArrayVar(&backupOptions.FilesFrom, "files-from", nil, "read the files to backup from `file` (can be combined with file args/can be specified multiple times)")
	f.StringArrayVar(&backupOptions.FilesFrom, "files-from", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
	f.StringArrayVar(&backupOptions.FilesFromVerbatim, "files-from-verbatim", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
	f.StringArrayVar(&backupOptions.FilesFromRaw, "files-from-raw", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
	f.StringVar(&backupOptions.TimeStamp, "time", "", "`time` of the backup (ex. '2012-11-01 22:08:41') (default: now)")
	f.BoolVar(&backupOptions.WithAtime, "with-atime", false, "store the atime for all files and directories")
	f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number changes when checking for modified files")
	f.BoolVar(&backupOptions.IgnoreCtime, "ignore-ctime", false, "ignore ctime changes when checking for modified files")
	if runtime.GOOS == "windows" {
		f.BoolVar(&backupOptions.UseFsSnapshot, "use-fs-snapshot", false, "use filesystem snapshot where possible (currently only Windows VSS)")
	}
@@ -156,11 +157,13 @@ func filterExisting(items []string) (result []string, err error) {
	return
}

// readFromFile will read all lines from the given filename and return them as
// a string array, if filename is empty readFromFile returns and empty string
// array. If filename is a dash (-), readFromFile will read the lines from the
// readLines reads all lines from the named file and returns them as a
// string slice.
//
// If filename is empty, readPatternsFromFile returns an empty slice.
// If filename is a dash (-), readPatternsFromFile will read the lines from the
// standard input.
func readLinesFromFile(filename string) ([]string, error) {
func readLines(filename string) ([]string, error) {
	if filename == "" {
		return nil, nil
	}
@@ -184,29 +187,72 @@ func readLinesFromFile(filename string) ([]string, error) {

	scanner := bufio.NewScanner(bytes.NewReader(data))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		// ignore empty lines
		if line == "" {
			continue
		}
		// strip comments
		if strings.HasPrefix(line, "#") {
			continue
		}
		lines = append(lines, line)
		lines = append(lines, scanner.Text())
	}

	if err := scanner.Err(); err != nil {
		return nil, err
	}

	return lines, nil
}

// readFilenamesFromFileRaw reads a list of filenames from the given file,
// or stdin if filename is "-". Each filename is terminated by a zero byte,
// which is stripped off.
func readFilenamesFromFileRaw(filename string) (names []string, err error) {
	f := os.Stdin
	if filename != "-" {
		if f, err = os.Open(filename); err != nil {
			return nil, err
		}
	}

	names, err = readFilenamesRaw(f)
	if err != nil {
		// ignore subsequent errors
		_ = f.Close()
		return nil, err
	}

	err = f.Close()
	if err != nil {
		return nil, err
	}

	return names, nil
}

func readFilenamesRaw(r io.Reader) (names []string, err error) {
	br := bufio.NewReader(r)
	for {
		name, err := br.ReadString(0)
		switch err {
		case nil:
		case io.EOF:
			if name == "" {
				return names, nil
			}
			return nil, errors.Fatal("--files-from-raw: trailing zero byte missing")
		default:
			return nil, err
		}

		name = name[:len(name)-1]
		if name == "" {
			// The empty filename is never valid. Handle this now to
			// prevent downstream code from erroneously backing up
			// filepath.Clean("") == ".".
			return nil, errors.Fatal("--files-from-raw: empty filename in listing")
		}
		names = append(names, name)
	}
}

// Check returns an error when an invalid combination of options was set.
func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
	if gopts.password == "" {
		for _, filename := range opts.FilesFrom {
		filesFrom := append(append(opts.FilesFrom, opts.FilesFromVerbatim...), opts.FilesFromRaw...)
		for _, filename := range filesFrom {
			if filename == "-" {
				return errors.Fatal("unable to read password from stdin when data is to be read from stdin, use --password-file or $RESTIC_PASSWORD")
			}
@@ -217,6 +263,12 @@ func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
		if len(opts.FilesFrom) > 0 {
			return errors.Fatal("--stdin and --files-from cannot be used together")
		}
		if len(opts.FilesFromVerbatim) > 0 {
			return errors.Fatal("--stdin and --files-from-verbatim cannot be used together")
		}
		if len(opts.FilesFromRaw) > 0 {
			return errors.Fatal("--stdin and --files-from-raw cannot be used together")
		}

		if len(args) > 0 {
			return errors.Fatal("--stdin was specified and files/dirs were listed as arguments")
@@ -356,15 +408,19 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
		return nil, nil
	}

	var lines []string
	for _, file := range opts.FilesFrom {
		fromfile, err := readLinesFromFile(file)
		fromfile, err := readLines(file)
		if err != nil {
			return nil, err
		}

		// expand wildcards
		for _, line := range fromfile {
			line = strings.TrimSpace(line)
			if line == "" || line[0] == '#' { // '#' marks a comment.
				continue
			}

			var expanded []string
			expanded, err := filepath.Glob(line)
			if err != nil {
|
			if len(expanded) == 0 {
				Warnf("pattern %q does not match any files, skipping\n", line)
			}
			lines = append(lines, expanded...)
			targets = append(targets, expanded...)
		}
	}

	// merge files from files-from into normal args so we can reuse the normal
	// args checks and have the ability to use both files-from and args at the
	// same time
	args = append(args, lines...)
	if len(args) == 0 && !opts.Stdin {
	for _, file := range opts.FilesFromVerbatim {
		fromfile, err := readLines(file)
		if err != nil {
			return nil, err
		}
		for _, line := range fromfile {
			if line == "" {
				continue
			}
			targets = append(targets, line)
		}
	}

	for _, file := range opts.FilesFromRaw {
		fromfile, err := readFilenamesFromFileRaw(file)
		if err != nil {
			return nil, err
		}
		targets = append(targets, fromfile...)
	}

	// Merge args into files-from so we can reuse the normal args checks
	// and have the ability to use both files-from and args at the same time.
	targets = append(targets, args...)
	if len(targets) == 0 && !opts.Stdin {
		return nil, errors.Fatal("nothing to backup, please specify target files/dirs")
	}

	targets = args
	targets, err = filterExisting(targets)
	if err != nil {
		return nil, err
@@ -486,15 +561,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
	}()
	gopts.stdout, gopts.stderr = p.Stdout(), p.Stderr()

	if s, ok := os.LookupEnv("RESTIC_PROGRESS_FPS"); ok {
		fps, err := strconv.Atoi(s)
		if err == nil && fps >= 1 {
			if fps > 60 {
				fps = 60
			}
			p.SetMinUpdatePause(time.Second / time.Duration(fps))
		}
	}
	p.SetMinUpdatePause(calculateProgressInterval())

	t.Go(func() error { return p.Run(t.Context(gopts.ctx)) })
@@ -532,8 +599,12 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
		return err
	}

	if !gopts.JSON && parentSnapshotID != nil {
		p.V("using parent snapshot %v\n", parentSnapshotID.Str())
	if !gopts.JSON {
		if parentSnapshotID != nil {
			p.P("using parent snapshot %v\n", parentSnapshotID.Str())
		} else {
			p.P("no parent snapshot found, will read all files\n")
		}
	}

	selectByNameFilter := func(item string) bool {
@@ -611,7 +682,15 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
	arch.CompleteItem = p.CompleteItem
	arch.StartFile = p.StartFile
	arch.CompleteBlob = p.CompleteBlob
	arch.IgnoreInode = opts.IgnoreInode

	if opts.IgnoreInode {
		// --ignore-inode implies --ignore-ctime: on FUSE, the ctime is not
		// reliable either.
		arch.ChangeIgnoreFlags |= archiver.ChangeIgnoreCtime | archiver.ChangeIgnoreInode
	}
	if opts.IgnoreCtime {
		arch.ChangeIgnoreFlags |= archiver.ChangeIgnoreCtime
	}

	if parentSnapshotID == nil {
		parentSnapshotID = &restic.ID{}
@@ -619,7 +698,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina

	snapshotOpts := archiver.SnapshotOptions{
		Excludes:       opts.Excludes,
		Tags:           opts.Tags,
		Tags:           opts.Tags.Flatten(),
		Time:           timeStamp,
		Hostname:       opts.Host,
		ParentSnapshot: *parentSnapshotID,
cmd/restic/cmd_backup_test.go (new file, 113 lines)
@@ -0,0 +1,113 @@
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"sort"
	"strings"
	"testing"

	rtest "github.com/restic/restic/internal/test"
)

func TestCollectTargets(t *testing.T) {
	dir, cleanup := rtest.TempDir(t)
	defer cleanup()

	fooSpace := "foo "
	barStar := "bar*" // Must sort before the others, below.
	if runtime.GOOS == "windows" { // Doesn't allow "*" or trailing space.
		fooSpace = "foo"
		barStar = "bar"
	}

	var expect []string
	for _, filename := range []string{
		barStar, "baz", "cmdline arg", fooSpace,
		"fromfile", "fromfile-raw", "fromfile-verbatim", "quux",
	} {
		// All mentioned files must exist for collectTargets.
		f, err := os.Create(filepath.Join(dir, filename))
		rtest.OK(t, err)
		rtest.OK(t, f.Close())

		expect = append(expect, f.Name())
	}

	f1, err := os.Create(filepath.Join(dir, "fromfile"))
	rtest.OK(t, err)
	// Empty lines should be ignored. A line starting with '#' is a comment.
	fmt.Fprintf(f1, "\n%s*\n # here's a comment\n", f1.Name())
	rtest.OK(t, f1.Close())

	f2, err := os.Create(filepath.Join(dir, "fromfile-verbatim"))
	rtest.OK(t, err)
	for _, filename := range []string{fooSpace, barStar} {
		// Empty lines should be ignored. CR+LF is allowed.
		fmt.Fprintf(f2, "%s\r\n\n", filepath.Join(dir, filename))
	}
	rtest.OK(t, f2.Close())

	f3, err := os.Create(filepath.Join(dir, "fromfile-raw"))
	rtest.OK(t, err)
	for _, filename := range []string{"baz", "quux"} {
		fmt.Fprintf(f3, "%s\x00", filepath.Join(dir, filename))
	}
	rtest.OK(t, err)
	rtest.OK(t, f3.Close())

	opts := BackupOptions{
		FilesFrom:         []string{f1.Name()},
		FilesFromVerbatim: []string{f2.Name()},
		FilesFromRaw:      []string{f3.Name()},
	}

	targets, err := collectTargets(opts, []string{filepath.Join(dir, "cmdline arg")})
	rtest.OK(t, err)
	sort.Strings(targets)
	rtest.Equals(t, expect, targets)
}

func TestReadFilenamesRaw(t *testing.T) {
	// These should all be returned exactly as-is.
	expected := []string{
		"\xef\xbb\xbf/utf-8-bom",
		"/absolute",
		"../.././relative",
		"\t\t leading and trailing space \t\t",
		"newline\nin filename",
		"not UTF-8: \x80\xff/simple",
		` / *[]* \ `,
	}

	var buf bytes.Buffer
	for _, name := range expected {
		buf.WriteString(name)
		buf.WriteByte(0)
	}

	got, err := readFilenamesRaw(&buf)
	rtest.OK(t, err)
	rtest.Equals(t, expected, got)

	// Empty input is ok.
	got, err = readFilenamesRaw(strings.NewReader(""))
	rtest.OK(t, err)
	rtest.Equals(t, 0, len(got))

	// An empty filename is an error.
	_, err = readFilenamesRaw(strings.NewReader("foo\x00\x00"))
	rtest.Assert(t, err != nil, "no error for zero byte")
	rtest.Assert(t, strings.Contains(err.Error(), "empty filename"),
		"wrong error message: %v", err.Error())

	// No trailing NUL byte is an error, because it likely means we're
	// reading a line-oriented text file (someone forgot -print0).
	_, err = readFilenamesRaw(strings.NewReader("simple.txt"))
	rtest.Assert(t, err != nil, "no error for zero byte")
	rtest.Assert(t, strings.Contains(err.Error(), "zero byte"),
		"wrong error message: %v", err.Error())
}
@@ -148,7 +148,7 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
 		})
 	}
 
-	tab.Write(gopts.stdout)
+	_ = tab.Write(gopts.stdout)
 	Printf("%d cache dirs in %s\n", len(dirs), cachedir)
 
 	return nil
@@ -42,10 +42,13 @@ func runCat(gopts GlobalOptions, args []string) error {
 		return err
 	}
 
-	lock, err := lockRepo(gopts.ctx, repo)
-	defer unlockRepo(lock)
-	if err != nil {
-		return err
+	if !gopts.NoLock {
+		lock, err := lockRepo(gopts.ctx, repo)
+		if err != nil {
+			return err
+		}
+
+		defer unlockRepo(lock)
 	}
 
 	tpe := args[0]
@@ -165,7 +168,8 @@ func runCat(gopts GlobalOptions, args []string) error {
 
 	case "blob":
 		for _, t := range []restic.BlobType{restic.DataBlob, restic.TreeBlob} {
-			if !repo.Index().Has(id, t) {
+			bh := restic.BlobHandle{ID: id, Type: t}
+			if !repo.Index().Has(bh) {
 				continue
 			}
@@ -1,10 +1,11 @@
 package main
 
 import (
 	"fmt"
 	"io/ioutil"
+	"math/rand"
 	"strconv"
 	"strings"
 	"time"
 
 	"github.com/spf13/cobra"
@@ -53,25 +54,38 @@ func init() {
 
 	f := cmdCheck.Flags()
 	f.BoolVar(&checkOptions.ReadData, "read-data", false, "read all data blobs")
-	f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read subset n of m data packs (format: `n/m`)")
+	f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read a `subset` of data packs, specified as 'n/t' for specific subset or either 'x%' or 'x.y%' for random subset")
 	f.BoolVar(&checkOptions.CheckUnused, "check-unused", false, "find unused blobs")
 	f.BoolVar(&checkOptions.WithCache, "with-cache", false, "use the cache")
 }
 
 func checkFlags(opts CheckOptions) error {
 	if opts.ReadData && opts.ReadDataSubset != "" {
-		return errors.Fatalf("check flags --read-data and --read-data-subset cannot be used together")
+		return errors.Fatal("check flags --read-data and --read-data-subset cannot be used together")
 	}
 	if opts.ReadDataSubset != "" {
 		dataSubset, err := stringToIntSlice(opts.ReadDataSubset)
-		if err != nil || len(dataSubset) != 2 {
-			return errors.Fatalf("check flag --read-data-subset must have two positive integer values, e.g. --read-data-subset=1/2")
-		}
-		if dataSubset[0] == 0 || dataSubset[1] == 0 || dataSubset[0] > dataSubset[1] {
-			return errors.Fatalf("check flag --read-data-subset=n/t values must be positive integers, and n <= t, e.g. --read-data-subset=1/2")
-		}
-		if dataSubset[1] > totalBucketsMax {
-			return errors.Fatalf("check flag --read-data-subset=n/t t must be at most %d", totalBucketsMax)
+		argumentError := errors.Fatal("check flag --read-data-subset must have two positive integer values or a percentage, e.g. --read-data-subset=1/2 or --read-data-subset=2.5%%")
+		if err == nil {
+			if len(dataSubset) != 2 {
+				return argumentError
+			}
+			if dataSubset[0] == 0 || dataSubset[1] == 0 || dataSubset[0] > dataSubset[1] {
+				return errors.Fatal("check flag --read-data-subset=n/t values must be positive integers, and n <= t, e.g. --read-data-subset=1/2")
+			}
+			if dataSubset[1] > totalBucketsMax {
+				return errors.Fatalf("check flag --read-data-subset=n/t t must be at most %d", totalBucketsMax)
+			}
+		} else {
+			percentage, err := parsePercentage(opts.ReadDataSubset)
+			if err != nil {
+				return argumentError
+			}
+
+			if percentage <= 0.0 || percentage > 100.0 {
+				return errors.Fatal(
+					"check flag --read-data-subset=n% n must be above 0.0% and at most 100.0%")
+			}
 		}
 	}
@@ -98,6 +112,21 @@ func stringToIntSlice(param string) (split []uint, err error) {
 	return result, nil
 }
 
+// ParsePercentage parses a percentage string of the form "X%" where X is a float constant,
+// and returns the value of that constant. It does not check the range of the value.
+func parsePercentage(s string) (float64, error) {
+	if !strings.HasSuffix(s, "%") {
+		return 0, errors.Errorf(`parsePercentage: %q does not end in "%%"`, s)
+	}
+	s = s[:len(s)-1]
+
+	p, err := strconv.ParseFloat(s, 64)
+	if err != nil {
+		return 0, errors.Errorf("parsePercentage: %v", err)
+	}
+	return p, nil
+}
+
 // prepareCheckCache configures a special cache directory for check.
 //
 // * if --with-cache is specified, the default cache is used
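The rewritten `checkFlags` above first tries the `n/t` form via `stringToIntSlice` and only falls back to `parsePercentage` when that fails. A condensed, self-contained sketch of that two-branch dispatch — `parseSubset` and its inlined helpers are re-implementations for illustration, not restic's code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSubset mirrors the dispatch in checkFlags: interpret the
// argument as "n/t" first, then as a percentage. Error messages and
// the return value are illustrative.
func parseSubset(s string) (string, error) {
	if parts := strings.Split(s, "/"); len(parts) == 2 {
		n, err1 := strconv.ParseUint(parts[0], 10, 32)
		t, err2 := strconv.ParseUint(parts[1], 10, 32)
		if err1 == nil && err2 == nil {
			if n == 0 || t == 0 || n > t {
				return "", fmt.Errorf("n/t values must be positive and n <= t")
			}
			return fmt.Sprintf("bucket %d of %d", n, t), nil
		}
	}
	if strings.HasSuffix(s, "%") {
		p, err := strconv.ParseFloat(strings.TrimSuffix(s, "%"), 64)
		if err == nil && p > 0 && p <= 100 {
			return fmt.Sprintf("%.1f%% of packs", p), nil
		}
	}
	return "", fmt.Errorf("invalid subset %q", s)
}

func main() {
	fmt.Println(parseSubset("1/2"))
	fmt.Println(parseSubset("2.5%"))
}
```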
@@ -165,7 +194,7 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 		}
 	}
 
-	chkr := checker.New(repo)
+	chkr := checker.New(repo, opts.CheckUnused)
 
 	Verbosef("load indexes\n")
 	hints, errs := chkr.LoadIndex(gopts.ctx)
@@ -212,7 +241,11 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 
 	Verbosef("check snapshots, trees and blobs\n")
 	errChan = make(chan error)
-	go chkr.Structure(gopts.ctx, errChan)
+	go func() {
+		bar := newProgressMax(!gopts.Quiet, 0, "snapshots")
+		defer bar.Done()
+		chkr.Structure(gopts.ctx, bar, errChan)
+	}()
 
 	for err := range errChan {
 		errorsFound = true
@@ -227,29 +260,15 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 	}
 
 	if opts.CheckUnused {
-		for _, id := range chkr.UnusedBlobs() {
+		for _, id := range chkr.UnusedBlobs(gopts.ctx) {
 			Verbosef("unused blob %v\n", id)
 			errorsFound = true
 		}
 	}
 
-	doReadData := func(bucket, totalBuckets uint) {
-		packs := restic.IDSet{}
-		for pack := range chkr.GetPacks() {
-			// If we ever check more than the first byte
-			// of pack, update totalBucketsMax.
-			if (uint(pack[0]) % totalBuckets) == (bucket - 1) {
-				packs.Insert(pack)
-			}
-		}
+	doReadData := func(packs map[restic.ID]int64) {
 		packCount := uint64(len(packs))
 
-		if packCount < chkr.CountPacks() {
-			Verbosef(fmt.Sprintf("read group #%d of %d data packs (out of total %d packs in %d groups)\n", bucket, packCount, chkr.CountPacks(), totalBuckets))
-		} else {
-			Verbosef("read all data\n")
-		}
-
 		p := newProgressMax(!gopts.Quiet, packCount, "packs")
 		errChan := make(chan error)
@@ -259,14 +278,31 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 			errorsFound = true
 			Warnf("%v\n", err)
 		}
 		p.Done()
 	}
 
 	switch {
 	case opts.ReadData:
-		doReadData(1, 1)
+		Verbosef("read all data\n")
+		doReadData(selectPacksByBucket(chkr.GetPacks(), 1, 1))
 	case opts.ReadDataSubset != "":
-		dataSubset, _ := stringToIntSlice(opts.ReadDataSubset)
-		doReadData(dataSubset[0], dataSubset[1])
+		var packs map[restic.ID]int64
+		dataSubset, err := stringToIntSlice(opts.ReadDataSubset)
+		if err == nil {
+			bucket := dataSubset[0]
+			totalBuckets := dataSubset[1]
+			packs = selectPacksByBucket(chkr.GetPacks(), bucket, totalBuckets)
+			packCount := uint64(len(packs))
+			Verbosef("read group #%d of %d data packs (out of total %d packs in %d groups)\n", bucket, packCount, chkr.CountPacks(), totalBuckets)
+		} else {
+			percentage, _ := parsePercentage(opts.ReadDataSubset)
+			packs = selectRandomPacksByPercentage(chkr.GetPacks(), percentage)
+			Verbosef("read %.1f%% of data packs\n", percentage)
+		}
+		if packs == nil {
+			return errors.Fatal("internal error: failed to select packs to check")
+		}
+		doReadData(packs)
 	}
 
 	if errorsFound {
@@ -277,3 +313,42 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 
 	return nil
 }
+
+// selectPacksByBucket selects subsets of packs by ranges of buckets.
+func selectPacksByBucket(allPacks map[restic.ID]int64, bucket, totalBuckets uint) map[restic.ID]int64 {
+	packs := make(map[restic.ID]int64)
+	for pack, size := range allPacks {
+		// If we ever check more than the first byte
+		// of pack, update totalBucketsMax.
+		if (uint(pack[0]) % totalBuckets) == (bucket - 1) {
+			packs[pack] = size
+		}
+	}
+	return packs
+}
+
+// selectRandomPacksByPercentage selects the given percentage of packs which are randomly chosen.
+func selectRandomPacksByPercentage(allPacks map[restic.ID]int64, percentage float64) map[restic.ID]int64 {
+	packCount := len(allPacks)
+	packsToCheck := int(float64(packCount) * (percentage / 100.0))
+	if packsToCheck < 1 {
+		packsToCheck = 1
+	}
+	timeNs := time.Now().UnixNano()
+	r := rand.New(rand.NewSource(timeNs))
+	idx := r.Perm(packCount)
+
+	var keys []restic.ID
+	for k := range allPacks {
+		keys = append(keys, k)
+	}
+
+	packs := make(map[restic.ID]int64)
+
+	for i := 0; i < packsToCheck; i++ {
+		id := keys[idx[i]]
+		packs[id] = allPacks[id]
+	}
+
+	return packs
+}
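The bucket test in `selectPacksByBucket` above keys off only the first byte of the pack ID: a pack belongs to bucket `b` (1-based) of `t` when `pack[0] % t == b-1`. A toy illustration of that mapping, with a plain 16-byte array standing in for `restic.ID`:

```go
package main

import "fmt"

// id is a stand-in for restic.ID (a 32-byte hash in restic; the length
// is irrelevant here since only the first byte is inspected).
type id [16]byte

// selectBucket returns the 1-based bucket a pack ID falls into,
// inverting the membership test used by selectPacksByBucket.
func selectBucket(packID id, totalBuckets uint) uint {
	return uint(packID[0])%totalBuckets + 1
}

func main() {
	var p id
	p[0] = 7
	// With 5 buckets, first byte 7 falls into bucket 3 (7 % 5 == 2).
	fmt.Println(selectBucket(p, 5))
}
```

Because the mapping depends only on the ID, the same pack always lands in the same bucket, so repeated runs of `--read-data-subset=n/t` with increasing `n` eventually cover every pack.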
cmd/restic/cmd_check_test.go (new file, 124 lines)
@@ -0,0 +1,124 @@
package main

import (
	"math"
	"reflect"
	"testing"

	"github.com/restic/restic/internal/restic"
	rtest "github.com/restic/restic/internal/test"
)

func TestParsePercentage(t *testing.T) {
	testCases := []struct {
		input       string
		output      float64
		expectError bool
	}{
		{"0%", 0.0, false},
		{"1%", 1.0, false},
		{"100%", 100.0, false},
		{"123%", 123.0, false},
		{"123.456%", 123.456, false},
		{"0.742%", 0.742, false},
		{"-100%", -100.0, false},
		{" 1%", 0.0, true},
		{"1 %", 0.0, true},
		{"1% ", 0.0, true},
	}
	for _, testCase := range testCases {
		output, err := parsePercentage(testCase.input)

		if testCase.expectError {
			rtest.Assert(t, err != nil, "Expected error for case %s", testCase.input)
			rtest.Assert(t, output == 0.0, "Expected output to be 0.0, got %v", output)
		} else {
			rtest.Assert(t, err == nil, "Expected no error for case %s", testCase.input)
			rtest.Assert(t, math.Abs(testCase.output-output) < 0.00001, "Expected %f, got %f",
				testCase.output, output)
		}
	}
}

func TestStringToIntSlice(t *testing.T) {
	testCases := []struct {
		input       string
		output      []uint
		expectError bool
	}{
		{"3/5", []uint{3, 5}, false},
		{"1/100", []uint{1, 100}, false},
		{"abc", nil, true},
		{"1/a", nil, true},
		{"/", nil, true},
	}
	for _, testCase := range testCases {
		output, err := stringToIntSlice(testCase.input)

		if testCase.expectError {
			rtest.Assert(t, err != nil, "Expected error for case %s", testCase.input)
			rtest.Assert(t, output == nil, "Expected output to be nil, got %v", output)
		} else {
			rtest.Assert(t, err == nil, "Expected no error for case %s", testCase.input)
			rtest.Assert(t, len(output) == 2, "Invalid output length for case %s", testCase.input)
			rtest.Assert(t, reflect.DeepEqual(output, testCase.output), "Expected %v, got %v",
				testCase.output, output)
		}
	}
}

func TestSelectPacksByBucket(t *testing.T) {
	var testPacks = make(map[restic.ID]int64)
	for i := 1; i <= 10; i++ {
		id := restic.NewRandomID()
		// ensure relevant part of generated id is reproducible
		id[0] = byte(i)
		testPacks[id] = 0
	}

	selectedPacks := selectPacksByBucket(testPacks, 0, 10)
	rtest.Assert(t, len(selectedPacks) == 0, "Expected 0 selected packs")

	for i := uint(1); i <= 5; i++ {
		selectedPacks = selectPacksByBucket(testPacks, i, 5)
		rtest.Assert(t, len(selectedPacks) == 2, "Expected 2 selected packs")
	}

	selectedPacks = selectPacksByBucket(testPacks, 1, 1)
	rtest.Assert(t, len(selectedPacks) == 10, "Expected 10 selected packs")
	for testPack := range testPacks {
		_, ok := selectedPacks[testPack]
		rtest.Assert(t, ok, "Expected input and output to be equal")
	}
}

func TestSelectRandomPacksByPercentage(t *testing.T) {
	var testPacks = make(map[restic.ID]int64)
	for i := 1; i <= 10; i++ {
		testPacks[restic.NewRandomID()] = 0
	}

	selectedPacks := selectRandomPacksByPercentage(testPacks, 0.0)
	rtest.Assert(t, len(selectedPacks) == 1, "Expected 1 selected pack")

	selectedPacks = selectRandomPacksByPercentage(testPacks, 10.0)
	rtest.Assert(t, len(selectedPacks) == 1, "Expected 1 selected pack")
	for pack := range selectedPacks {
		_, ok := testPacks[pack]
		rtest.Assert(t, ok, "Unexpected selection")
	}

	selectedPacks = selectRandomPacksByPercentage(testPacks, 50.0)
	rtest.Assert(t, len(selectedPacks) == 5, "Expected 5 selected packs")
	for pack := range selectedPacks {
		_, ok := testPacks[pack]
		rtest.Assert(t, ok, "Unexpected item in selection")
	}

	selectedPacks = selectRandomPacksByPercentage(testPacks, 100.0)
	rtest.Assert(t, len(selectedPacks) == 10, "Expected 10 selected packs")
	for testPack := range testPacks {
		_, ok := selectedPacks[testPack]
		rtest.Assert(t, ok, "Expected input and output to be equal")
	}
}
@@ -6,6 +6,7 @@ import (
 
 	"github.com/restic/restic/internal/debug"
 	"github.com/restic/restic/internal/restic"
+	"golang.org/x/sync/errgroup"
 
 	"github.com/spf13/cobra"
 )
@@ -14,12 +15,19 @@ var cmdCopy = &cobra.Command{
 	Use:   "copy [flags] [snapshotID ...]",
 	Short: "Copy snapshots from one repository to another",
 	Long: `
-The "copy" command copies one or more snapshots from one repository to another
-repository. Note that this will have to read (download) and write (upload) the
-entire snapshot(s) due to the different encryption keys on the source and
-destination, and that transferred files are not re-chunked, which may break
-their deduplication. This can be mitigated by the "--copy-chunker-params"
-option when initializing a new destination repository using the "init" command.
+The "copy" command copies one or more snapshots from one repository to another.
+
+NOTE: This process will have to both download (read) and upload (write) the
+entire snapshot(s) due to the different encryption keys used in the source and
+destination repositories. This /may incur higher bandwidth usage and costs/ than
+expected during normal backup runs.
+
+NOTE: The copying process does not re-chunk files, which may break deduplication
+between the files copied and files already stored in the destination repository.
+This means that copied files, which existed in both the source and destination
+repository, /may occupy up to twice their space/ in the destination repository.
+This can be mitigated by the "--copy-chunker-params" option when initializing a
+new destination repository using the "init" command.
 `,
 	RunE: func(cmd *cobra.Command, args []string) error {
 		return runCopy(copyOptions, globalOptions, args)
@@ -96,12 +104,8 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
 		dstSnapshotByOriginal[*sn.ID()] = append(dstSnapshotByOriginal[*sn.ID()], sn)
 	}
 
-	cloner := &treeCloner{
-		srcRepo:      srcRepo,
-		dstRepo:      dstRepo,
-		visitedTrees: restic.NewIDSet(),
-		buf:          nil,
-	}
+	// remember already processed trees across all snapshots
+	visitedTrees := restic.NewIDSet()
 
 	for sn := range FindFilteredSnapshots(ctx, srcRepo, opts.Hosts, opts.Tags, opts.Paths, args) {
 		Verbosef("\nsnapshot %s of %v at %s)\n", sn.ID().Str(), sn.Paths, sn.Time)
@@ -126,7 +130,7 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
 		}
 		Verbosef("  copy started, this may take a while...\n")
 
-		if err := cloner.copyTree(ctx, *sn.Tree); err != nil {
+		if err := copyTree(ctx, srcRepo, dstRepo, visitedTrees, *sn.Tree); err != nil {
 			return err
 		}
 		debug.Log("tree copied")
@@ -170,64 +174,64 @@ func similarSnapshots(sna *restic.Snapshot, snb *restic.Snapshot) bool {
 	return true
 }
 
-type treeCloner struct {
-	srcRepo      restic.Repository
-	dstRepo      restic.Repository
-	visitedTrees restic.IDSet
-	buf          []byte
-}
-
-func (t *treeCloner) copyTree(ctx context.Context, treeID restic.ID) error {
-	// We have already processed this tree
-	if t.visitedTrees.Has(treeID) {
-		return nil
-	}
-
-	tree, err := t.srcRepo.LoadTree(ctx, treeID)
-	if err != nil {
-		return fmt.Errorf("LoadTree(%v) returned error %v", treeID.Str(), err)
-	}
-	t.visitedTrees.Insert(treeID)
-
-	// Do we already have this tree blob?
-	if !t.dstRepo.Index().Has(treeID, restic.TreeBlob) {
-		newTreeID, err := t.dstRepo.SaveTree(ctx, tree)
-		if err != nil {
-			return fmt.Errorf("SaveTree(%v) returned error %v", treeID.Str(), err)
-		}
-		// Assurance only.
-		if newTreeID != treeID {
-			return fmt.Errorf("SaveTree(%v) returned unexpected id %s", treeID.Str(), newTreeID.Str())
-		}
-	}
-
-	// TODO: parellize this stuff, likely only needed inside a tree.
-
-	for _, entry := range tree.Nodes {
-		// If it is a directory, recurse
-		if entry.Type == "dir" && entry.Subtree != nil {
-			if err := t.copyTree(ctx, *entry.Subtree); err != nil {
-				return err
-			}
-		}
-		// Copy the blobs for this file.
-		for _, blobID := range entry.Content {
-			// Do we already have this data blob?
-			if t.dstRepo.Index().Has(blobID, restic.DataBlob) {
-				continue
-			}
-			debug.Log("Copying blob %s\n", blobID.Str())
-			t.buf, err = t.srcRepo.LoadBlob(ctx, restic.DataBlob, blobID, t.buf)
-			if err != nil {
-				return fmt.Errorf("LoadBlob(%v) returned error %v", blobID, err)
-			}
-
-			_, _, err = t.dstRepo.SaveBlob(ctx, restic.DataBlob, t.buf, blobID, false)
-			if err != nil {
-				return fmt.Errorf("SaveBlob(%v) returned error %v", blobID, err)
-			}
-		}
-	}
-
-	return nil
-}
+func copyTree(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Repository,
+	visitedTrees restic.IDSet, rootTreeID restic.ID) error {
+
+	wg, ctx := errgroup.WithContext(ctx)
+
+	treeStream := restic.StreamTrees(ctx, wg, srcRepo, restic.IDs{rootTreeID}, func(treeID restic.ID) bool {
+		visited := visitedTrees.Has(treeID)
+		visitedTrees.Insert(treeID)
+		return visited
+	}, nil)
+
+	wg.Go(func() error {
+		// reused buffer
+		var buf []byte
+
+		for tree := range treeStream {
+			if tree.Error != nil {
+				return fmt.Errorf("LoadTree(%v) returned error %v", tree.ID.Str(), tree.Error)
+			}
+
+			// Do we already have this tree blob?
+			if !dstRepo.Index().Has(restic.BlobHandle{ID: tree.ID, Type: restic.TreeBlob}) {
+				newTreeID, err := dstRepo.SaveTree(ctx, tree.Tree)
+				if err != nil {
+					return fmt.Errorf("SaveTree(%v) returned error %v", tree.ID.Str(), err)
+				}
+				// Assurance only.
+				if newTreeID != tree.ID {
+					return fmt.Errorf("SaveTree(%v) returned unexpected id %s", tree.ID.Str(), newTreeID.Str())
+				}
+			}
+
+			// TODO: parallelize blob down/upload
+
+			for _, entry := range tree.Nodes {
+				// Recursion into directories is handled by StreamTrees
+				// Copy the blobs for this file.
+				for _, blobID := range entry.Content {
+					// Do we already have this data blob?
+					if dstRepo.Index().Has(restic.BlobHandle{ID: blobID, Type: restic.DataBlob}) {
+						continue
+					}
+					debug.Log("Copying blob %s\n", blobID.Str())
+					var err error
+					buf, err = srcRepo.LoadBlob(ctx, restic.DataBlob, blobID, buf)
+					if err != nil {
+						return fmt.Errorf("LoadBlob(%v) returned error %v", blobID, err)
+					}
+
+					_, _, err = dstRepo.SaveBlob(ctx, restic.DataBlob, buf, blobID, false)
+					if err != nil {
+						return fmt.Errorf("SaveBlob(%v) returned error %v", blobID, err)
+					}
+				}
+			}
+		}
+		return nil
+	})
+	return wg.Wait()
+}
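The rewritten `copyTree` above replaces explicit recursion with `restic.StreamTrees`: a producer streams tree IDs over a channel, a skip callback consults and updates the visited set so each tree is processed once, and a consumer drains the channel. A minimal stand-in for that pattern, with string IDs and a plain map instead of restic's types (`streamTrees` here is a toy, not the real API):

```go
package main

import "fmt"

// streamTrees walks a tree graph breadth-first and sends each ID on
// the returned channel, unless the skip callback says it was already
// seen. The caller owns the skip callback's visited state, exactly as
// in the copyTree rewrite.
func streamTrees(roots []string, skip func(string) bool, children map[string][]string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		queue := append([]string(nil), roots...)
		for len(queue) > 0 {
			id := queue[0]
			queue = queue[1:]
			if skip(id) {
				continue
			}
			out <- id
			queue = append(queue, children[id]...)
		}
	}()
	return out
}

func main() {
	// "b" is reachable twice; the skip callback deduplicates it.
	children := map[string][]string{"root": {"a", "b"}, "a": {"b"}}
	visited := map[string]bool{}
	skip := func(id string) bool {
		seen := visited[id]
		visited[id] = true
		return seen
	}
	for id := range streamTrees([]string{"root"}, skip, children) {
		fmt.Println(id)
	}
}
```

Putting the visited check inside the producer's skip callback (rather than in the consumer) means shared subtrees are never even loaded twice, which is the point of threading `visitedTrees` across all snapshots in `runCopy`.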
@@ -55,8 +55,7 @@ func prettyPrintJSON(wr io.Writer, item interface{}) error {
 }
 
 func debugPrintSnapshots(ctx context.Context, repo *repository.Repository, wr io.Writer) error {
-	return repo.List(ctx, restic.SnapshotFile, func(id restic.ID, size int64) error {
-		snapshot, err := restic.LoadSnapshot(ctx, repo, id)
+	return restic.ForAllSnapshots(ctx, repo, nil, func(id restic.ID, snapshot *restic.Snapshot, err error) error {
 		if err != nil {
 			return err
 		}
@@ -87,7 +86,7 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
 	return repo.List(ctx, restic.PackFile, func(id restic.ID, size int64) error {
 		h := restic.Handle{Type: restic.PackFile, Name: id.String()}
 
-		blobs, err := pack.List(repo.Key(), restic.ReaderAt(ctx, repo.Backend(), h), size)
+		blobs, _, err := pack.List(repo.Key(), restic.ReaderAt(ctx, repo.Backend(), h), size)
 		if err != nil {
 			Warnf("error for pack %v: %v\n", id.Str(), err)
 			return nil
@@ -111,10 +110,8 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
 }
 
 func dumpIndexes(ctx context.Context, repo restic.Repository, wr io.Writer) error {
-	return repo.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
+	return repository.ForAllIndexes(ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
 		Printf("index_id: %v\n", id)
-
-		idx, err := repository.LoadIndex(ctx, repo, id)
 		if err != nil {
 			return err
 		}
@@ -55,9 +55,8 @@ func init() {
 func loadSnapshot(ctx context.Context, repo *repository.Repository, desc string) (*restic.Snapshot, error) {
 	id, err := restic.FindSnapshot(ctx, repo, desc)
 	if err != nil {
-		return nil, err
+		return nil, errors.Fatal(err.Error())
 	}
 
 	return restic.LoadSnapshot(ctx, repo, id)
 }
@@ -365,6 +364,8 @@ func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
 	}
 
 	stats := NewDiffStats()
+	stats.BlobsBefore.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn1.Tree})
+	stats.BlobsAfter.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn2.Tree})
 
 	err = c.diffTree(ctx, stats, "/", *sn1.Tree, *sn2.Tree)
 	if err != nil {
@@ -21,8 +21,8 @@ var cmdDump = &cobra.Command{
 	Long: `
 The "dump" command extracts files from a snapshot from the repository. If a
 single file is selected, it prints its contents to stdout. Folders are output
-as a tar file containing the contents of the specified folder. Pass "/" as
-file name to dump the whole snapshot as a tar file.
+as a tar (default) or zip file containing the contents of the specified folder.
+Pass "/" as file name to dump the whole snapshot as an archive file.
 
 The special snapshot "latest" can be used to use the latest snapshot in the
 repository.
@@ -40,9 +40,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
 
 // DumpOptions collects all options for the dump command.
 type DumpOptions struct {
-	Hosts []string
-	Paths []string
-	Tags  restic.TagLists
+	Hosts   []string
+	Paths   []string
+	Tags    restic.TagLists
+	Archive string
 }
 
 var dumpOptions DumpOptions
@@ -54,6 +55,7 @@ func init() {
 	flags.StringArrayVarP(&dumpOptions.Hosts, "host", "H", nil, `only consider snapshots for this host when the snapshot ID is "latest" (can be specified multiple times)`)
 	flags.Var(&dumpOptions.Tags, "tag", "only consider snapshots which include this `taglist` for snapshot ID \"latest\"")
 	flags.StringArrayVar(&dumpOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` for snapshot ID \"latest\"")
+	flags.StringVarP(&dumpOptions.Archive, "archive", "a", "tar", "set archive `format` as \"tar\" or \"zip\"")
 }
 
 func splitPath(p string) []string {
@@ -65,8 +67,7 @@ func splitPath(p string) []string {
 	return append(s, f)
 }
 
-func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string) error {
-
+func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string, writeDump dump.WriteDump) error {
 	if tree == nil {
 		return fmt.Errorf("called with a nil tree")
 	}
@@ -81,10 +82,10 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
 	// If we print / we need to assume that there are multiple nodes at that
 	// level in the tree.
 	if pathComponents[0] == "" {
-		if err := checkStdoutTar(); err != nil {
+		if err := checkStdoutArchive(); err != nil {
 			return err
 		}
-		return dump.WriteTar(ctx, repo, tree, "/", os.Stdout)
+		return writeDump(ctx, repo, tree, "/", os.Stdout)
 	}
 
 	item := filepath.Join(prefix, pathComponents[0])
@@ -100,16 +101,16 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
 		if err != nil {
 			return errors.Wrapf(err, "cannot load subtree for %q", item)
 		}
-		return printFromTree(ctx, subtree, repo, item, pathComponents[1:])
+		return printFromTree(ctx, subtree, repo, item, pathComponents[1:], writeDump)
 	case dump.IsDir(node):
-		if err := checkStdoutTar(); err != nil {
+		if err := checkStdoutArchive(); err != nil {
 			return err
 		}
 		subtree, err := repo.LoadTree(ctx, *node.Subtree)
 		if err != nil {
 			return err
 		}
-		return dump.WriteTar(ctx, repo, subtree, item, os.Stdout)
+		return writeDump(ctx, repo, subtree, item, os.Stdout)
 	case l > 1:
 		return fmt.Errorf("%q should be a dir, but is a %q", item, node.Type)
 	case !dump.IsFile(node):
@@ -127,6 +128,16 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
 		return errors.Fatal("no file and no snapshot ID specified")
 	}
 
+	var wd dump.WriteDump
+	switch opts.Archive {
+	case "tar":
+		wd = dump.WriteTar
+	case "zip":
+		wd = dump.WriteZip
+	default:
+		return fmt.Errorf("unknown archive format %q", opts.Archive)
+	}
+
 	snapshotIDString := args[0]
 	pathToPrint := args[1]
@@ -176,7 +187,7 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
 		Exitf(2, "loading tree for snapshot %q failed: %v", snapshotIDString, err)
 	}
 
-	err = printFromTree(ctx, tree, repo, "/", splittedPath)
+	err = printFromTree(ctx, tree, repo, "/", splittedPath, wd)
 	if err != nil {
 		Exitf(2, "cannot dump file: %v", err)
 	}
@@ -184,7 +195,7 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func checkStdoutTar() error {
|
||||
func checkStdoutArchive() error {
|
||||
if stdoutIsTerminal() {
|
||||
return fmt.Errorf("stdout is the terminal, please redirect output")
|
||||
}
|
||||
|
||||
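The dump hunks above thread a writer function (`dump.WriteTar` or `dump.WriteZip`) through `printFromTree` instead of hard-coding tar output. A standalone sketch of the same pattern using only the standard library; `writeFn`, `selectWriter`, `writeTar` and `writeZip` are illustrative names, not restic's:

```go
package main

import (
	"archive/tar"
	"archive/zip"
	"bytes"
	"fmt"
	"io"
)

// writeFn mirrors the idea of dump.WriteDump: one signature, several formats.
type writeFn func(w io.Writer, name string, data []byte) error

func writeTar(w io.Writer, name string, data []byte) error {
	tw := tar.NewWriter(w)
	hdr := &tar.Header{Name: name, Mode: 0600, Size: int64(len(data))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(data); err != nil {
		return err
	}
	return tw.Close()
}

func writeZip(w io.Writer, name string, data []byte) error {
	zw := zip.NewWriter(w)
	f, err := zw.Create(name)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		return err
	}
	return zw.Close()
}

// selectWriter corresponds to the switch on opts.Archive in runDump.
func selectWriter(format string) (writeFn, error) {
	switch format {
	case "tar":
		return writeTar, nil
	case "zip":
		return writeZip, nil
	default:
		return nil, fmt.Errorf("unknown archive format %q", format)
	}
}

func main() {
	var buf bytes.Buffer
	wf, err := selectWriter("zip")
	if err != nil {
		panic(err)
	}
	if err := wf(&buf, "hello.txt", []byte("hi")); err != nil {
		panic(err)
	}
	fmt.Println("wrote", buf.Len() > 0)
}
```

The callers never need to know which format was picked; they just call the function value, which is why the switch can live once in `runDump`.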
@@ -394,7 +394,6 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
                delete(f.blobIDs, idStr[:shortStr])
            }
            f.out.PrintObject("blob", idStr, nodepath, parentTreeID.String(), sn)
            break
        }
    }

@@ -465,7 +464,7 @@ func (f *Finder) findObjectPack(ctx context.Context, id string, t restic.BlobTyp
        return
    }

    blobs := idx.Lookup(rid, t)
    blobs := idx.Lookup(restic.BlobHandle{ID: rid, Type: t})
    if len(blobs) == 0 {
        Printf("Object %s not found in the index\n", rid.Str())
        return

@@ -564,7 +563,10 @@ func runFind(opts FindOptions, gopts GlobalOptions, args []string) error {
    }

    if opts.PackID {
        f.packsToBlobs(ctx, []string{f.pat.pattern[0]}) // TODO: support multiple packs
        err := f.packsToBlobs(ctx, []string{f.pat.pattern[0]}) // TODO: support multiple packs
        if err != nil {
            return err
        }
    }

    for sn := range FindFilteredSnapshots(ctx, repo, opts.Hosts, opts.Tags, opts.Paths, opts.Snapshots) {
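The `Lookup` change above replaces an `(id, type)` argument pair with a single `restic.BlobHandle` value. A minimal sketch of why a comparable struct works directly as a map key; `BlobHandle`, `packIndex` and `lookup` here are simplified stand-ins, not restic's actual types:

```go
package main

import "fmt"

// BlobHandle is a comparable composite key: blob ID plus blob type.
// Using the struct directly as a map key replaces a two-argument
// lookup like Lookup(id, type) with a single handle.
type BlobHandle struct {
	ID   string
	Type uint8
}

const (
	DataBlob uint8 = iota
	TreeBlob
)

// packIndex maps a blob handle to the packs containing it
// (a toy stand-in for the repository index).
var packIndex = map[BlobHandle][]string{
	{ID: "abc", Type: DataBlob}: {"pack1"},
	{ID: "abc", Type: TreeBlob}: {"pack2"},
}

func lookup(h BlobHandle) []string {
	return packIndex[h]
}

func main() {
	// the same ID under a different blob type is a distinct key
	fmt.Println(lookup(BlobHandle{ID: "abc", Type: DataBlob}))
	fmt.Println(lookup(BlobHandle{ID: "abc", Type: TreeBlob}))
}
```

Bundling the two fields also means every call site that forwards a blob reference passes one value instead of keeping two loose arguments in sync.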
@@ -68,7 +68,11 @@ func init() {
    f.Var(&forgetOptions.KeepTags, "keep-tag", "keep snapshots with this `taglist` (can be specified multiple times)")
    f.StringArrayVar(&forgetOptions.Hosts, "host", nil, "only consider snapshots with the given `host` (can be specified multiple times)")
    f.StringArrayVar(&forgetOptions.Hosts, "hostname", nil, "only consider snapshots with the given `hostname` (can be specified multiple times)")
    f.MarkDeprecated("hostname", "use --host")
    err := f.MarkDeprecated("hostname", "use --host")
    if err != nil {
        // MarkDeprecated only returns an error when the flag is not found
        panic(err)
    }

    f.Var(&forgetOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")

@@ -80,9 +84,15 @@ func init() {
    f.BoolVar(&forgetOptions.Prune, "prune", false, "automatically run the 'prune' command if snapshots have been removed")

    f.SortFlags = false
    addPruneOptions(cmdForget)
}

func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
    err := verifyPruneOptions(&pruneOptions)
    if err != nil {
        return err
    }

    repo, err := OpenRepository(gopts)
    if err != nil {
        return err

@@ -204,8 +214,12 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
        }
    }

    if len(removeSnIDs) > 0 && opts.Prune && !opts.DryRun {
        return pruneRepository(gopts, repo)
    if len(removeSnIDs) > 0 && opts.Prune {
        if !gopts.JSON {
            Verbosef("%d snapshots have been removed, running prune\n", len(removeSnIDs))
        }
        pruneOptions.DryRun = opts.DryRun
        return runPruneWithRepo(pruneOptions, gopts, repo, removeSnIDs)
    }

    return nil
@@ -43,10 +43,6 @@ func init() {
}

func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
    if gopts.Repo == "" {
        return errors.Fatal("Please specify repository location (-r)")
    }

    chunkerPolynomial, err := maybeReadChunkerPolynomial(opts, gopts)
    if err != nil {
        return err
@@ -60,8 +60,7 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
    case "locks":
        t = restic.LockFile
    case "blobs":
        return repo.List(opts.ctx, restic.IndexFile, func(id restic.ID, size int64) error {
            idx, err := repository.LoadIndex(opts.ctx, repo, id)
        return repository.ForAllIndexes(opts.ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
            if err != nil {
                return err
            }

@@ -70,7 +69,6 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
            }
            return nil
        })

    default:
        return errors.Fatal("invalid type")
    }
@@ -159,16 +159,19 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
        enc := json.NewEncoder(gopts.stdout)

        printSnapshot = func(sn *restic.Snapshot) {
            enc.Encode(lsSnapshot{
            err = enc.Encode(lsSnapshot{
                Snapshot:   sn,
                ID:         sn.ID(),
                ShortID:    sn.ID().Str(),
                StructType: "snapshot",
            })
            if err != nil {
                Warnf("JSON encode failed: %v\n", err)
            }
        }

        printNode = func(path string, node *restic.Node) {
            enc.Encode(lsNode{
            err = enc.Encode(lsNode{
                Name: node.Name,
                Type: node.Type,
                Path: path,

@@ -181,6 +184,9 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
                ChangeTime: node.ChangeTime,
                StructType: "node",
            })
            if err != nil {
                Warnf("JSON encode failed: %v\n", err)
            }
        }
    } else {
        printSnapshot = func(sn *restic.Snapshot) {
@@ -116,11 +116,8 @@ func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
    mountpoint := args[0]

    if _, err := resticfs.Stat(mountpoint); os.IsNotExist(errors.Cause(err)) {
        Verbosef("Mountpoint %s doesn't exist, creating it\n", mountpoint)
        err = resticfs.Mkdir(mountpoint, os.ModeDir|0700)
        if err != nil {
            return err
        }
        Verbosef("Mountpoint %s doesn't exist\n", mountpoint)
        return err
    }
    mountOptions := []systemFuse.MountOption{
        systemFuse.ReadOnly(),
@@ -23,8 +23,14 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
    DisableAutoGenTag: true,
    Run: func(cmd *cobra.Command, args []string) {
        fmt.Printf("All Extended Options:\n")
        var maxLen int
        for _, opt := range options.List() {
            fmt.Printf(" %-15s %s\n", opt.Namespace+"."+opt.Name, opt.Text)
            if l := len(opt.Namespace + "." + opt.Name); l > maxLen {
                maxLen = l
            }
        }
        for _, opt := range options.List() {
            fmt.Printf(" %*s %s\n", -maxLen, opt.Namespace+"."+opt.Name, opt.Text)
        }
    },
}
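The options hunk above replaces a fixed `%-15s` column with a measured one: a first pass finds the longest option name, a second pass prints with `%*s` and a negative width, which left-justifies to exactly that measured width. A self-contained sketch of the two-pass layout; `opt` and `render` are illustrative names:

```go
package main

import "fmt"

type opt struct{ name, text string }

// render does the two-pass alignment from cmd_options.go: measure the
// longest name first, then print with a computed field width ("%*s"
// with a negative width left-justifies, like "%-15s" but dynamic).
func render(opts []opt) []string {
	var maxLen int
	for _, o := range opts {
		if l := len(o.name); l > maxLen {
			maxLen = l
		}
	}
	lines := make([]string, 0, len(opts))
	for _, o := range opts {
		lines = append(lines, fmt.Sprintf("%*s  %s", -maxLen, o.name, o.text))
	}
	return lines
}

func main() {
	for _, l := range render([]opt{
		{"s3.connections", "set a connection limit"},
		{"gs.region", "set the bucket region"},
	}) {
		fmt.Println(l)
	}
}
```

The payoff is that descriptions line up in one column no matter how long the longest option name grows.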
@@ -1,15 +1,23 @@
package main

import (
    "math"
    "sort"
    "strconv"
    "strings"

    "github.com/restic/restic/internal/debug"
    "github.com/restic/restic/internal/errors"
    "github.com/restic/restic/internal/index"
    "github.com/restic/restic/internal/repository"
    "github.com/restic/restic/internal/restic"

    "github.com/spf13/cobra"
)

var errorIndexIncomplete = errors.Fatal("index is not complete")
var errorPacksMissing = errors.Fatal("packs from index missing in repo")
var errorSizeNotMatching = errors.Fatal("pack size does not match calculated size from index")

var cmdPrune = &cobra.Command{
    Use:   "prune [flags]",
    Short: "Remove unneeded data from the repository",

@@ -24,12 +32,91 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
    DisableAutoGenTag: true,
    RunE: func(cmd *cobra.Command, args []string) error {
        return runPrune(globalOptions)
        return runPrune(pruneOptions, globalOptions)
    },
}

// PruneOptions collects all options for the cleanup command.
type PruneOptions struct {
    DryRun bool

    MaxUnused      string
    maxUnusedBytes func(used uint64) (unused uint64) // calculates the number of unused bytes after repacking, according to MaxUnused

    MaxRepackSize  string
    MaxRepackBytes uint64

    RepackCachableOnly bool
}

var pruneOptions PruneOptions

func init() {
    cmdRoot.AddCommand(cmdPrune)
    f := cmdPrune.Flags()
    f.BoolVarP(&pruneOptions.DryRun, "dry-run", "n", false, "do not modify the repository, just print what would be done")
    addPruneOptions(cmdPrune)
}

func addPruneOptions(c *cobra.Command) {
    f := c.Flags()
    f.StringVar(&pruneOptions.MaxUnused, "max-unused", "5%", "tolerate given `limit` of unused data (absolute value in bytes with suffixes k/K, m/M, g/G, t/T, a value in % or the word 'unlimited')")
    f.StringVar(&pruneOptions.MaxRepackSize, "max-repack-size", "", "maximum `size` to repack (allowed suffixes: k/K, m/M, g/G, t/T)")
    f.BoolVar(&pruneOptions.RepackCachableOnly, "repack-cacheable-only", false, "only repack packs which are cacheable")
}

func verifyPruneOptions(opts *PruneOptions) error {
    if len(opts.MaxRepackSize) > 0 {
        size, err := parseSizeStr(opts.MaxRepackSize)
        if err != nil {
            return err
        }
        opts.MaxRepackBytes = uint64(size)
    }

    maxUnused := strings.TrimSpace(opts.MaxUnused)
    if maxUnused == "" {
        return errors.Fatalf("invalid value for --max-unused: %q", opts.MaxUnused)
    }

    // parse MaxUnused either as unlimited, a percentage, or an absolute number of bytes
    switch {
    case maxUnused == "unlimited":
        opts.maxUnusedBytes = func(used uint64) uint64 {
            return math.MaxUint64
        }

    case strings.HasSuffix(maxUnused, "%"):
        maxUnused = strings.TrimSuffix(maxUnused, "%")
        p, err := strconv.ParseFloat(maxUnused, 64)
        if err != nil {
            return errors.Fatalf("invalid percentage %q passed for --max-unused: %v", opts.MaxUnused, err)
        }

        if p < 0 {
            return errors.Fatal("percentage for --max-unused must be positive")
        }

        if p >= 100 {
            return errors.Fatal("percentage for --max-unused must be below 100%")
        }

        opts.maxUnusedBytes = func(used uint64) uint64 {
            return uint64(p / (100 - p) * float64(used))
        }

    default:
        size, err := parseSizeStr(maxUnused)
        if err != nil {
            return errors.Fatalf("invalid number of bytes %q for --max-unused: %v", opts.MaxUnused, err)
        }

        opts.maxUnusedBytes = func(used uint64) uint64 {
            return uint64(size)
        }
    }

    return nil
}
func shortenStatus(maxLength int, s string) string {

@@ -44,7 +131,12 @@ func shortenStatus(maxLength int, s string) string {
    return s[:maxLength-3] + "..."
}

func runPrune(gopts GlobalOptions) error {
func runPrune(opts PruneOptions, gopts GlobalOptions) error {
    err := verifyPruneOptions(&opts)
    if err != nil {
        return err
    }

    repo, err := OpenRepository(gopts)
    if err != nil {
        return err
@@ -56,203 +148,398 @@ func runPrune(gopts GlobalOptions) error {
        return err
    }

    return runPruneWithRepo(opts, gopts, repo, restic.NewIDSet())
}

func runPruneWithRepo(opts PruneOptions, gopts GlobalOptions, repo *repository.Repository, ignoreSnapshots restic.IDSet) error {
    // we do not need index updates while pruning!
    repo.DisableAutoIndexUpdate()

    return pruneRepository(gopts, repo)
}

func mixedBlobs(list []restic.Blob) bool {
    var tree, data bool

    for _, pb := range list {
        switch pb.Type {
        case restic.TreeBlob:
            tree = true
        case restic.DataBlob:
            data = true
        }

        if tree && data {
            return true
        }
    if repo.Cache == nil {
        Print("warning: running prune without a cache, this may be very slow!\n")
    }

    return false
    Verbosef("loading indexes...\n")
    err := repo.LoadIndex(gopts.ctx)
    if err != nil {
        return err
    }

    usedBlobs, err := getUsedBlobs(gopts, repo, ignoreSnapshots)
    if err != nil {
        return err
    }

    return prune(opts, gopts, repo, usedBlobs)
}

func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
type packInfo struct {
    usedBlobs      uint
    unusedBlobs    uint
    duplicateBlobs uint
    usedSize       uint64
    unusedSize     uint64
    tpe            restic.BlobType
}

type packInfoWithID struct {
    ID restic.ID
    packInfo
}

// prune selects which files to rewrite and then does that. The map usedBlobs is
// modified in the process.
func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedBlobs restic.BlobSet) error {
    ctx := gopts.ctx

    err := repo.LoadIndex(ctx)
    if err != nil {
        return err
    }

    var stats struct {
        blobs     int
        packs     int
        snapshots int
        bytes     int64
        blobs struct {
            used      uint
            duplicate uint
            unused    uint
            remove    uint
            repack    uint
            repackrm  uint
        }
        size struct {
            used      uint64
            duplicate uint64
            unused    uint64
            remove    uint64
            repack    uint64
            repackrm  uint64
            unref     uint64
        }
        packs struct {
            used       uint
            unused     uint
            partlyUsed uint
            keep       uint
        }
    }

    Verbosef("counting files in repo\n")
    err = repo.List(ctx, restic.PackFile, func(restic.ID, int64) error {
        stats.packs++
    Verbosef("searching used packs...\n")

    keepBlobs := restic.NewBlobSet()
    duplicateBlobs := restic.NewBlobSet()

    // iterate over all blobs in index to find out which blobs are duplicates
    for blob := range repo.Index().Each(ctx) {
        bh := blob.BlobHandle
        size := uint64(blob.Length)
        switch {
        case usedBlobs.Has(bh): // used blob, move to keepBlobs
            usedBlobs.Delete(bh)
            keepBlobs.Insert(bh)
            stats.size.used += size
            stats.blobs.used++
        case keepBlobs.Has(bh): // duplicate blob
            duplicateBlobs.Insert(bh)
            stats.size.duplicate += size
            stats.blobs.duplicate++
        default:
            stats.size.unused += size
            stats.blobs.unused++
        }
    }

    // Check if all used blobs have been found in index
    if len(usedBlobs) != 0 {
        Warnf("%v not found in the index\n\n"+
            "Integrity check failed: Data seems to be missing.\n"+
            "Will not start prune to prevent (additional) data loss!\n"+
            "Please report this error (along with the output of the 'prune' run) at\n"+
            "https://github.com/restic/restic/issues/new/choose", usedBlobs)
        return errorIndexIncomplete
    }

    indexPack := make(map[restic.ID]packInfo)

    // save computed pack header size
    for pid, hdrSize := range repo.Index().PackSize(ctx, true) {
        // initialize tpe with NumBlobTypes to indicate it's not set
        indexPack[pid] = packInfo{tpe: restic.NumBlobTypes, usedSize: uint64(hdrSize)}
    }

    // iterate over all blobs in index to generate packInfo
    for blob := range repo.Index().Each(ctx) {
        ip := indexPack[blob.PackID]

        // Set blob type if not yet set
        if ip.tpe == restic.NumBlobTypes {
            ip.tpe = blob.Type
        }

        // mark mixed packs with "Invalid blob type"
        if ip.tpe != blob.Type {
            ip.tpe = restic.InvalidBlob
        }

        bh := blob.BlobHandle
        size := uint64(blob.Length)
        switch {
        case duplicateBlobs.Has(bh): // duplicate blob
            ip.usedSize += size
            ip.duplicateBlobs++
        case keepBlobs.Has(bh): // used blob, not duplicate
            ip.usedSize += size
            ip.usedBlobs++
        default: // unused blob
            ip.unusedSize += size
            ip.unusedBlobs++
        }
        // update indexPack
        indexPack[blob.PackID] = ip
    }

    Verbosef("collecting packs for deletion and repacking\n")
    removePacksFirst := restic.NewIDSet()
    removePacks := restic.NewIDSet()
    repackPacks := restic.NewIDSet()

    var repackCandidates []packInfoWithID
    repackAllPacksWithDuplicates := true

    keep := func(p packInfo) {
        stats.packs.keep++
        if p.duplicateBlobs > 0 {
            repackAllPacksWithDuplicates = false
        }
    }

    // loop over all packs and decide what to do
    bar := newProgressMax(!gopts.Quiet, uint64(len(indexPack)), "packs processed")
    err := repo.List(ctx, restic.PackFile, func(id restic.ID, packSize int64) error {
        p, ok := indexPack[id]
        if !ok {
            // Pack was not referenced in index and is not used => immediately remove!
            Verboseff("will remove pack %v as it is unused and not indexed\n", id.Str())
            removePacksFirst.Insert(id)
            stats.size.unref += uint64(packSize)
            return nil
        }

        if p.unusedSize+p.usedSize != uint64(packSize) &&
            !(p.usedBlobs == 0 && p.duplicateBlobs == 0) {
            // Pack size does not fit and pack is needed => error
            // If the pack is not needed, this is no error, the pack can
            // and will be simply removed, see below.
            Warnf("pack %s: calculated size %d does not match real size %d\nRun 'restic rebuild-index'.",
                id.Str(), p.unusedSize+p.usedSize, packSize)
            return errorSizeNotMatching
        }

        // statistics
        switch {
        case p.usedBlobs == 0 && p.duplicateBlobs == 0:
            stats.packs.unused++
        case p.unusedBlobs == 0:
            stats.packs.used++
        default:
            stats.packs.partlyUsed++
        }

        // decide what to do
        switch {
        case p.usedBlobs == 0 && p.duplicateBlobs == 0:
            // All blobs in pack are no longer used => remove pack!
            removePacks.Insert(id)
            stats.blobs.remove += p.unusedBlobs
            stats.size.remove += p.unusedSize

        case opts.RepackCachableOnly && p.tpe == restic.DataBlob:
            // if this is a data pack and --repack-cacheable-only is set => keep pack!
            keep(p)

        case p.unusedBlobs == 0 && p.duplicateBlobs == 0 && p.tpe != restic.InvalidBlob:
            // All blobs in pack are used and not duplicates/mixed => keep pack!
            keep(p)

        default:
            // all other packs are candidates for repacking
            repackCandidates = append(repackCandidates, packInfoWithID{ID: id, packInfo: p})
        }

        delete(indexPack, id)
        bar.Add(1)
        return nil
    })
    bar.Done()
    if err != nil {
        return err
    }

    Verbosef("building new index for repo\n")
    // At this point indexPacks contains only missing packs!

    bar := newProgressMax(!gopts.Quiet, uint64(stats.packs), "packs")
    idx, invalidFiles, err := index.New(ctx, repo, restic.NewIDSet(), bar)
    if err != nil {
        return err
    // missing packs that are not needed can be ignored
    ignorePacks := restic.NewIDSet()
    for id, p := range indexPack {
        if p.usedBlobs == 0 && p.duplicateBlobs == 0 {
            ignorePacks.Insert(id)
            stats.blobs.remove += p.unusedBlobs
            stats.size.remove += p.unusedSize
            delete(indexPack, id)
        }
    }

    for _, id := range invalidFiles {
        Warnf("incomplete pack file (will be removed): %v\n", id)
    if len(indexPack) != 0 {
        Warnf("The index references %d needed pack files which are missing from the repository:\n", len(indexPack))
        for id := range indexPack {
            Warnf(" %v\n", id)
        }
        return errorPacksMissing
    }
    if len(ignorePacks) != 0 {
        Warnf("Missing but unneeded pack files are referenced in the index, will be repaired\n")
        for id := range ignorePacks {
            Warnf("will forget missing pack file %v\n", id)
        }
    }

    blobs := 0
    for _, pack := range idx.Packs {
        stats.bytes += pack.Size
        blobs += len(pack.Entries)
    // calculate limit for number of unused bytes in the repo after repacking
    maxUnusedSizeAfter := opts.maxUnusedBytes(stats.size.used)

    // Sort repackCandidates such that packs with highest ratio unused/used space are picked first.
    // This is equivalent to sorting by unused / total space.
    // Instead of unused[i] / used[i] > unused[j] / used[j] we use
    // unused[i] * used[j] > unused[j] * used[i] as uint32*uint32 < uint64
    // Moreover duplicates and packs containing trees are sorted to the beginning
    sort.Slice(repackCandidates, func(i, j int) bool {
        pi := repackCandidates[i].packInfo
        pj := repackCandidates[j].packInfo
        switch {
        case pi.duplicateBlobs > 0 && pj.duplicateBlobs == 0:
            return true
        case pj.duplicateBlobs > 0 && pi.duplicateBlobs == 0:
            return false
        case pi.tpe != restic.DataBlob && pj.tpe == restic.DataBlob:
            return true
        case pj.tpe != restic.DataBlob && pi.tpe == restic.DataBlob:
            return false
        }
        return pi.unusedSize*pj.usedSize > pj.unusedSize*pi.usedSize
    })

    repack := func(id restic.ID, p packInfo) {
        repackPacks.Insert(id)
        stats.blobs.repack += p.unusedBlobs + p.duplicateBlobs + p.usedBlobs
        stats.size.repack += p.unusedSize + p.usedSize
        stats.blobs.repackrm += p.unusedBlobs
        stats.size.repackrm += p.unusedSize
    }
    Verbosef("repository contains %v packs (%v blobs) with %v\n",
        len(idx.Packs), blobs, formatBytes(uint64(stats.bytes)))

    blobCount := make(map[restic.BlobHandle]int)
    var duplicateBlobs uint64
    var duplicateBytes uint64
    for _, p := range repackCandidates {
        reachedUnusedSizeAfter := (stats.size.unused-stats.size.remove-stats.size.repackrm < maxUnusedSizeAfter)

    // find duplicate blobs
    for _, p := range idx.Packs {
        for _, entry := range p.Entries {
            stats.blobs++
            h := restic.BlobHandle{ID: entry.ID, Type: entry.Type}
            blobCount[h]++
        reachedRepackSize := false
        if opts.MaxRepackBytes > 0 {
            reachedRepackSize = stats.size.repack+p.unusedSize+p.usedSize > opts.MaxRepackBytes
        }

            if blobCount[h] > 1 {
                duplicateBlobs++
                duplicateBytes += uint64(entry.Length)
        switch {
        case reachedRepackSize:
            keep(p.packInfo)

        case p.duplicateBlobs > 0, p.tpe != restic.DataBlob:
            // repacking duplicates/non-data is only limited by repackSize
            repack(p.ID, p.packInfo)

        case reachedUnusedSizeAfter:
            // for all other packs stop repacking if tolerated unused size is reached.
            keep(p.packInfo)

        default:
            repack(p.ID, p.packInfo)
        }
    }

    // if all duplicates are repacked, print out correct statistics
    if repackAllPacksWithDuplicates {
        stats.blobs.repackrm += stats.blobs.duplicate
        stats.size.repackrm += stats.size.duplicate
    }

    Verboseff("\nused: %10d blobs / %s\n", stats.blobs.used, formatBytes(stats.size.used))
    if stats.blobs.duplicate > 0 {
        Verboseff("duplicates: %10d blobs / %s\n", stats.blobs.duplicate, formatBytes(stats.size.duplicate))
    }
    Verboseff("unused: %10d blobs / %s\n", stats.blobs.unused, formatBytes(stats.size.unused))
    if stats.size.unref > 0 {
        Verboseff("unreferenced: %s\n", formatBytes(stats.size.unref))
    }
    totalBlobs := stats.blobs.used + stats.blobs.unused + stats.blobs.duplicate
    totalSize := stats.size.used + stats.size.duplicate + stats.size.unused + stats.size.unref
    unusedSize := stats.size.duplicate + stats.size.unused
    Verboseff("total: %10d blobs / %s\n", totalBlobs, formatBytes(totalSize))
    Verboseff("unused size: %s of total size\n", formatPercent(unusedSize, totalSize))

    Verbosef("\nto repack: %10d blobs / %s\n", stats.blobs.repack, formatBytes(stats.size.repack))
    Verbosef("this removes %10d blobs / %s\n", stats.blobs.repackrm, formatBytes(stats.size.repackrm))
    Verbosef("to delete: %10d blobs / %s\n", stats.blobs.remove, formatBytes(stats.size.remove+stats.size.unref))
    totalPruneSize := stats.size.remove + stats.size.repackrm + stats.size.unref
    Verbosef("total prune: %10d blobs / %s\n", stats.blobs.remove+stats.blobs.repackrm, formatBytes(totalPruneSize))
    Verbosef("remaining: %10d blobs / %s\n", totalBlobs-(stats.blobs.remove+stats.blobs.repackrm), formatBytes(totalSize-totalPruneSize))
    unusedAfter := unusedSize - stats.size.remove - stats.size.repackrm
    Verbosef("unused size after prune: %s (%s of remaining size)\n",
        formatBytes(unusedAfter), formatPercent(unusedAfter, totalSize-totalPruneSize))
    Verbosef("\n")
    Verboseff("totally used packs: %10d\n", stats.packs.used)
    Verboseff("partly used packs: %10d\n", stats.packs.partlyUsed)
    Verboseff("unused packs: %10d\n\n", stats.packs.unused)

    Verboseff("to keep: %10d packs\n", stats.packs.keep)
    Verboseff("to repack: %10d packs\n", len(repackPacks))
    Verboseff("to delete: %10d packs\n", len(removePacks))
    if len(removePacksFirst) > 0 {
        Verboseff("to delete: %10d unreferenced packs\n\n", len(removePacksFirst))
    }

    if opts.DryRun {
        if !gopts.JSON && gopts.verbosity >= 2 {
            if len(removePacksFirst) > 0 {
                Printf("Would have removed the following unreferenced packs:\n%v\n\n", removePacksFirst)
            }
            Printf("Would have repacked and removed the following packs:\n%v\n\n", repackPacks)
            Printf("Would have removed the following no longer used packs:\n%v\n\n", removePacks)
        }
        // Always quit here if DryRun was set!
        return nil
    }

    Verbosef("processed %d blobs: %d duplicate blobs, %v duplicate\n",
        stats.blobs, duplicateBlobs, formatBytes(uint64(duplicateBytes)))
    Verbosef("load all snapshots\n")

    // find referenced blobs
    snapshots, err := restic.LoadAllSnapshots(ctx, repo)
    if err != nil {
        return err
    // unreferenced packs can be safely deleted first
    if len(removePacksFirst) != 0 {
        Verbosef("deleting unreferenced packs\n")
        DeleteFiles(gopts, repo, removePacksFirst, restic.PackFile)
    }

    stats.snapshots = len(snapshots)

    usedBlobs, err := getUsedBlobs(gopts, repo, snapshots)
    if err != nil {
        return err
    }

    var missingBlobs []restic.BlobHandle
    for h := range usedBlobs {
        if _, ok := blobCount[h]; !ok {
            missingBlobs = append(missingBlobs, h)
        }
    }
    if len(missingBlobs) > 0 {
        return errors.Fatalf("%v not found in the new index\n"+
            "Data blobs seem to be missing, aborting prune to prevent further data loss!\n"+
            "Please report this error (along with the output of the 'prune' run) at\n"+
            "https://github.com/restic/restic/issues/new/choose", missingBlobs)
    }

    Verbosef("found %d of %d data blobs still in use, removing %d blobs\n",
        len(usedBlobs), stats.blobs, stats.blobs-len(usedBlobs))

    // find packs that need a rewrite
    rewritePacks := restic.NewIDSet()
    for _, pack := range idx.Packs {
        if mixedBlobs(pack.Entries) {
            rewritePacks.Insert(pack.ID)
            continue
        }

        for _, blob := range pack.Entries {
            h := restic.BlobHandle{ID: blob.ID, Type: blob.Type}
            if !usedBlobs.Has(h) {
                rewritePacks.Insert(pack.ID)
                continue
            }

            if blobCount[h] > 1 {
                rewritePacks.Insert(pack.ID)
            }
        }
    }

    removeBytes := duplicateBytes

    // find packs that are unneeded
    removePacks := restic.NewIDSet()

    Verbosef("will remove %d invalid files\n", len(invalidFiles))
    for _, id := range invalidFiles {
        removePacks.Insert(id)
    }

    for packID, p := range idx.Packs {

        hasActiveBlob := false
        for _, blob := range p.Entries {
            h := restic.BlobHandle{ID: blob.ID, Type: blob.Type}
            if usedBlobs.Has(h) {
                hasActiveBlob = true
                continue
            }

            removeBytes += uint64(blob.Length)
        }

        if hasActiveBlob {
            continue
        }

        removePacks.Insert(packID)

        if !rewritePacks.Has(packID) {
            return errors.Fatalf("pack %v is unneeded, but not contained in rewritePacks", packID.Str())
        }

        rewritePacks.Delete(packID)
    }

    Verbosef("will delete %d packs and rewrite %d packs, this frees %s\n",
        len(removePacks), len(rewritePacks), formatBytes(uint64(removeBytes)))

    var obsoletePacks restic.IDSet
    if len(rewritePacks) != 0 {
        bar := newProgressMax(!gopts.Quiet, uint64(len(rewritePacks)), "packs rewritten")
        obsoletePacks, err = repository.Repack(ctx, repo, rewritePacks, usedBlobs, bar)
    if len(repackPacks) != 0 {
        Verbosef("repacking packs\n")
        bar := newProgressMax(!gopts.Quiet, uint64(len(repackPacks)), "packs repacked")
        _, err := repository.Repack(ctx, repo, repackPacks, keepBlobs, bar)
        bar.Done()
        if err != nil {
            return err
            return errors.Fatalf("%s", err)
        }

        // Also remove repacked packs
        removePacks.Merge(repackPacks)
    }

    removePacks.Merge(obsoletePacks)
    if len(ignorePacks) == 0 {
        ignorePacks = removePacks
    } else {
        ignorePacks.Merge(removePacks)
    }

    if err = rebuildIndex(ctx, repo, removePacks); err != nil {
        return err
    if len(ignorePacks) != 0 {
        err = rebuildIndexFiles(gopts, repo, ignorePacks, nil)
        if err != nil {
            return errors.Fatalf("%s", err)
        }
    }

    if len(removePacks) != 0 {
        Verbosef("remove %d old packs\n", len(removePacks))
        Verbosef("removing %d old packs\n", len(removePacks))
        DeleteFiles(gopts, repo, removePacks, restic.PackFile)
    }

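The `sort.Slice` comparator above ranks repack candidates by their unused/used ratio using cross-multiplication rather than division, which avoids floating point and division by zero; as the code comment notes, pack sizes fit in 32 bits, so the 64-bit products cannot overflow. A standalone sketch of that ordering (the `pack` struct and `sortByUnusedRatio` are illustrative, and the 32-bit-size assumption is carried over from the original comment):

```go
package main

import (
	"fmt"
	"sort"
)

type pack struct {
	id     string
	unused uint64 // assumed to fit in 32 bits, so products fit in uint64
	used   uint64
}

// sortByUnusedRatio orders packs so the highest unused/used ratio comes
// first, comparing unused[i]*used[j] > unused[j]*used[i] instead of
// dividing: no float rounding, and a pack with used == 0 needs no
// special case.
func sortByUnusedRatio(ps []pack) {
	sort.Slice(ps, func(i, j int) bool {
		return ps[i].unused*ps[j].used > ps[j].unused*ps[i].used
	})
}

func main() {
	ps := []pack{
		{"a", 10, 90}, // mostly used
		{"b", 50, 50}, // half unused
		{"c", 90, 10}, // mostly unused
	}
	sortByUnusedRatio(ps)
	for _, p := range ps {
		fmt.Println(p.id)
	}
}
```

Picking the worst-ratio packs first means each byte of repack I/O reclaims as much unused space as possible before the `--max-unused` budget is reached.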
@@ -260,30 +547,54 @@ func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
    return nil
}

func getUsedBlobs(gopts GlobalOptions, repo restic.Repository, snapshots []*restic.Snapshot) (usedBlobs restic.BlobSet, err error) {
func rebuildIndexFiles(gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) error {
    Verbosef("rebuilding index\n")

    idx := (repo.Index()).(*repository.MasterIndex)
    packcount := uint64(len(idx.Packs(removePacks)))
    bar := newProgressMax(!gopts.Quiet, packcount, "packs processed")
    obsoleteIndexes, err := idx.Save(gopts.ctx, repo, removePacks, extraObsolete, bar)
    bar.Done()
    if err != nil {
        return err
    }

    Verbosef("deleting obsolete index files\n")
    return DeleteFilesChecked(gopts, repo, obsoleteIndexes, restic.IndexFile)
}

func getUsedBlobs(gopts GlobalOptions, repo restic.Repository, ignoreSnapshots restic.IDSet) (usedBlobs restic.BlobSet, err error) {
    ctx := gopts.ctx

    Verbosef("find data that is still in use for %d snapshots\n", len(snapshots))
    var snapshotTrees restic.IDs
    Verbosef("loading all snapshots...\n")
    err = restic.ForAllSnapshots(gopts.ctx, repo, ignoreSnapshots,
        func(id restic.ID, sn *restic.Snapshot, err error) error {
            debug.Log("add snapshot %v (tree %v, error %v)", id, *sn.Tree, err)
            if err != nil {
                return err
            }
            snapshotTrees = append(snapshotTrees, *sn.Tree)
            return nil
        })
    if err != nil {
        return nil, err
    }

    Verbosef("finding data that is still in use for %d snapshots\n", len(snapshotTrees))

    usedBlobs = restic.NewBlobSet()

    bar := newProgressMax(!gopts.Quiet, uint64(len(snapshots)), "snapshots")
    bar.Start()
    bar := newProgressMax(!gopts.Quiet, uint64(len(snapshotTrees)), "snapshots")
    defer bar.Done()
    for _, sn := range snapshots {
        debug.Log("process snapshot %v", sn.ID())

        err = restic.FindUsedBlobs(ctx, repo, *sn.Tree, usedBlobs)
        if err != nil {
            if repo.Backend().IsNotExist(err) {
                return nil, errors.Fatal("unable to load a tree from the repo: " + err.Error())
            }

            return nil, err
    err = restic.FindUsedBlobs(ctx, repo, snapshotTrees, usedBlobs, bar)
    if err != nil {
        if repo.Backend().IsNotExist(err) {
            return nil, errors.Fatal("unable to load a tree from the repo: " + err.Error())
        }

        debug.Log("processed snapshot %v", sn.ID())
        bar.Report(restic.Stat{Blobs: 1})
        return nil, err
    }
    return usedBlobs, nil
}
@@ -1,10 +1,7 @@
package main

import (
"context"

"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"

"github.com/spf13/cobra"
@@ -12,7 +9,7 @@ import (

var cmdRebuildIndex = &cobra.Command{
Use: "rebuild-index [flags]",
Short: "Build a new index file",
Short: "Build a new index",
Long: `
The "rebuild-index" command creates a new index based on the pack files in the
repository.
@@ -24,15 +21,25 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runRebuildIndex(globalOptions)
return runRebuildIndex(rebuildIndexOptions, globalOptions)
},
}

func init() {
cmdRoot.AddCommand(cmdRebuildIndex)
// RebuildIndexOptions collects all options for the rebuild-index command.
type RebuildIndexOptions struct {
ReadAllPacks bool
}

func runRebuildIndex(gopts GlobalOptions) error {
var rebuildIndexOptions RebuildIndexOptions

func init() {
cmdRoot.AddCommand(cmdRebuildIndex)
f := cmdRebuildIndex.Flags()
f.BoolVar(&rebuildIndexOptions.ReadAllPacks, "read-all-packs", false, "read all pack files to generate new index from scratch")

}

func runRebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions) error {
repo, err := OpenRepository(gopts)
if err != nil {
return err
@@ -44,58 +51,80 @@ func runRebuildIndex(gopts GlobalOptions) error {
return err
}

ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
return rebuildIndex(ctx, repo, restic.NewIDSet())
return rebuildIndex(opts, gopts, repo, restic.NewIDSet())
}

func rebuildIndex(ctx context.Context, repo restic.Repository, ignorePacks restic.IDSet) error {
Verbosef("counting files in repo\n")
func rebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions, repo *repository.Repository, ignorePacks restic.IDSet) error {
ctx := gopts.ctx

var packs uint64
err := repo.List(ctx, restic.PackFile, func(restic.ID, int64) error {
packs++
var obsoleteIndexes restic.IDs
packSizeFromList := make(map[restic.ID]int64)
packSizeFromIndex := make(map[restic.ID]int64)
removePacks := restic.NewIDSet()

if opts.ReadAllPacks {
// get list of old index files but start with empty index
err := repo.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
obsoleteIndexes = append(obsoleteIndexes, id)
return nil
})
if err != nil {
return err
}
} else {
Verbosef("loading indexes...\n")
err := repo.LoadIndex(gopts.ctx)
if err != nil {
return err
}
packSizeFromIndex = repo.Index().PackSize(ctx, false)
}

Verbosef("getting pack files to read...\n")
err := repo.List(ctx, restic.PackFile, func(id restic.ID, packSize int64) error {
size, ok := packSizeFromIndex[id]
if !ok || size != packSize {
// Pack was not referenced in index or size does not match
packSizeFromList[id] = packSize
removePacks.Insert(id)
}
if !ok {
Warnf("adding pack file to index %v\n", id)
} else if size != packSize {
Warnf("reindexing pack file %v with unexpected size %v instead of %v\n", id, packSize, size)
}
delete(packSizeFromIndex, id)
return nil
})
if err != nil {
return err
}

bar := newProgressMax(!globalOptions.Quiet, packs-uint64(len(ignorePacks)), "packs")
idx, invalidFiles, err := index.New(ctx, repo, ignorePacks, bar)
if err != nil {
return err
for id := range packSizeFromIndex {
// forget pack files that are referenced in the index but do not exist
// when rebuilding the index
removePacks.Insert(id)
Warnf("removing not found pack file %v\n", id)
}

if globalOptions.verbosity >= 2 {
if len(packSizeFromList) > 0 {
Verbosef("reading pack files\n")
bar := newProgressMax(!globalOptions.Quiet, uint64(len(packSizeFromList)), "packs")
invalidFiles, err := repo.CreateIndexFromPacks(ctx, packSizeFromList, bar)
bar.Done()
if err != nil {
return err
}

for _, id := range invalidFiles {
Printf("skipped incomplete pack file: %v\n", id)
Verboseff("skipped incomplete pack file: %v\n", id)
}
}

Verbosef("finding old index files\n")

var supersedes restic.IDs
err = repo.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
supersedes = append(supersedes, id)
return nil
})
err = rebuildIndexFiles(gopts, repo, removePacks, obsoleteIndexes)
if err != nil {
return err
}

ids, err := idx.Save(ctx, repo, supersedes)
if err != nil {
return errors.Fatalf("unable to save index, last error was: %v", err)
}

Verbosef("saved new indexes as %v\n", ids)

Verbosef("remove %d old index files\n", len(supersedes))
err = DeleteFilesChecked(globalOptions, repo, restic.NewIDSet(supersedes...), restic.IndexFile)
if err != nil {
return errors.Fatalf("unable to remove an old index: %v\n", err)
}
Verbosef("done\n")

return nil
}

@@ -117,7 +117,10 @@ func runRecover(gopts GlobalOptions) error {
ModTime: time.Now(),
ChangeTime: time.Now(),
}
tree.Insert(&node)
err = tree.Insert(&node)
if err != nil {
return err
}
}

treeID, err := repo.SaveTree(gopts.ctx, tree)

@@ -191,14 +191,26 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
Verbosef("restoring %s to %s\n", res.Snapshot(), opts.Target)

err = res.RestoreTo(ctx, opts.Target)
if err == nil && opts.Verify {
if err != nil {
return err
}

if totalErrors > 0 {
return errors.Fatalf("There were %d errors\n", totalErrors)
}

if opts.Verify {
Verbosef("verifying files in %s\n", opts.Target)
var count int
count, err = res.VerifyFiles(ctx, opts.Target)
if err != nil {
return err
}
if totalErrors > 0 {
return errors.Fatalf("There were %d errors\n", totalErrors)
}
Verbosef("finished verifying %d files in %s\n", count, opts.Target)
}
if totalErrors > 0 {
Printf("There were %d errors\n", totalErrors)
}
return err

return nil
}

@@ -47,7 +47,7 @@ func init() {

f := cmdSnapshots.Flags()
f.StringArrayVarP(&snapshotOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host` (can be specified multiple times)")
f.Var(&snapshotOptions.Tags, "tag", "only consider snapshots which include this `taglist` (can be specified multiple times)")
f.Var(&snapshotOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
f.StringArrayVar(&snapshotOptions.Paths, "path", nil, "only consider snapshots for this `path` (can be specified multiple times)")
f.BoolVarP(&snapshotOptions.Compact, "compact", "c", false, "use compact output format")
f.BoolVar(&snapshotOptions.Last, "last", false, "only show the last snapshot for each host and path")
@@ -243,7 +243,10 @@ func PrintSnapshots(stdout io.Writer, list restic.Snapshots, reasons []restic.Ke
}
}

tab.Write(stdout)
err := tab.Write(stdout)
if err != nil {
Warnf("error printing: %v\n", err)
}
}

// PrintSnapshotGroupHeader prints which group of the group-by option the

@@ -11,7 +11,8 @@ import (
func TestEmptySnapshotGroupJSON(t *testing.T) {
for _, grouped := range []bool{false, true} {
var w strings.Builder
printSnapshotGroupJSON(&w, nil, grouped)
err := printSnapshotGroupJSON(&w, nil, grouped)
rtest.OK(t, err)

rtest.Equals(t, "[]", strings.TrimSpace(w.String()))
}

@@ -166,7 +166,7 @@ func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo rest
if statsOptions.countMode == countModeRawData {
// count just the sizes of unique blobs; we don't need to walk the tree
// ourselves in this case, since a nifty function does it for us
return restic.FindUsedBlobs(ctx, repo, *snapshot.Tree, stats.blobs)
return restic.FindUsedBlobs(ctx, repo, restic.IDs{*snapshot.Tree}, stats.blobs, nil)
}

err := walker.Walk(ctx, repo, *snapshot.Tree, restic.NewIDSet(), statsWalkTree(repo, stats))

@@ -38,9 +38,9 @@ type TagOptions struct {
Hosts []string
Paths []string
Tags restic.TagLists
SetTags []string
AddTags []string
RemoveTags []string
SetTags restic.TagLists
AddTags restic.TagLists
RemoveTags restic.TagLists
}

var tagOptions TagOptions
@@ -49,9 +49,9 @@ func init() {
cmdRoot.AddCommand(cmdTag)

tagFlags := cmdTag.Flags()
tagFlags.StringSliceVar(&tagOptions.SetTags, "set", nil, "`tag` which will replace the existing tags (can be given multiple times)")
tagFlags.StringSliceVar(&tagOptions.AddTags, "add", nil, "`tag` which will be added to the existing tags (can be given multiple times)")
tagFlags.StringSliceVar(&tagOptions.RemoveTags, "remove", nil, "`tag` which will be removed from the existing tags (can be given multiple times)")
tagFlags.Var(&tagOptions.SetTags, "set", "`tags` which will replace the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.Var(&tagOptions.AddTags, "add", "`tags` which will be added to the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.Var(&tagOptions.RemoveTags, "remove", "`tags` which will be removed from the existing tags in the format `tag[,tag,...]` (can be given multiple times)")

tagFlags.StringArrayVarP(&tagOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when no snapshot ID is given (can be specified multiple times)")
tagFlags.Var(&tagOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot-ID is given")
@@ -130,7 +130,7 @@ func runTag(opts TagOptions, gopts GlobalOptions, args []string) error {
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
for sn := range FindFilteredSnapshots(ctx, repo, opts.Hosts, opts.Tags, opts.Paths, args) {
changed, err := changeTags(ctx, repo, sn, opts.SetTags, opts.AddTags, opts.RemoveTags)
changed, err := changeTags(ctx, repo, sn, opts.SetTags.Flatten(), opts.AddTags.Flatten(), opts.RemoveTags.Flatten())
if err != nil {
Warnf("unable to modify the tags for snapshot ID %q, ignoring: %v\n", sn.ID(), err)
continue

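The tag flags above move from plain string slices to restic.TagLists, so each occurrence of --set/--add/--remove can carry a comma-separated group that is later flattened before being passed to changeTags. A minimal standalone sketch of that shape (the type and method names mirror the diff, but this simplified version is an assumption, not restic's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// TagList is one comma-separated group of tags from a single flag occurrence.
type TagList []string

// TagLists collects the values of a repeatable flag such as --add.
type TagLists []TagList

// Flatten merges all groups into a single slice of tags.
func (l TagLists) Flatten() (tags []string) {
	for _, list := range l {
		tags = append(tags, list...)
	}
	return tags
}

// parseTagList splits one flag value, e.g. "NL,CH", into a TagList.
func parseTagList(s string) TagList {
	return TagList(strings.Split(s, ","))
}

func main() {
	var lists TagLists
	// simulates: --add NL,CH --add US
	for _, arg := range []string{"NL,CH", "US"} {
		lists = append(lists, parseTagList(arg))
	}
	fmt.Println(lists.Flatten())
}
```

The cobra flags bind TagLists through the pflag Value interface (tagFlags.Var), which is why the flag registration changes from StringSliceVar to Var in the hunk above.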
@@ -9,7 +9,7 @@ import (
// DeleteFiles deletes the given fileList of fileType in parallel
// it will print a warning if there is an error, but continue deleting the remaining files
func DeleteFiles(gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) {
deleteFiles(gopts, true, repo, fileList, fileType)
_ = deleteFiles(gopts, true, repo, fileList, fileType)
}

// DeleteFilesChecked deletes the given fileList of fileType in parallel
@@ -33,8 +33,8 @@ func deleteFiles(gopts GlobalOptions, ignoreError bool, repo restic.Repository,
}()

bar := newProgressMax(!gopts.JSON && !gopts.Quiet, uint64(totalCount), "files deleted")
defer bar.Done()
wg, ctx := errgroup.WithContext(gopts.ctx)
bar.Start()
for i := 0; i < numDeleteWorkers; i++ {
wg.Go(func() error {
for id := range fileChan {
@@ -51,12 +51,11 @@ func deleteFiles(gopts GlobalOptions, ignoreError bool, repo restic.Repository,
if !gopts.JSON && gopts.verbosity > 2 {
Verbosef("removed %v\n", h)
}
bar.Report(restic.Stat{Blobs: 1})
bar.Add(1)
}
return nil
})
}
err := wg.Wait()
bar.Done()
return err
}

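The deleteFiles hunk above fans file IDs out over a fixed pool of workers draining a shared channel. restic uses golang.org/x/sync/errgroup for error propagation; the dependency-free sketch below substitutes sync.WaitGroup and a counter, so it only illustrates the fan-out shape, not the error handling:

```go
package main

import (
	"fmt"
	"sync"
)

const numDeleteWorkers = 8

// deleteAll feeds ids into a channel and lets a fixed number of workers
// drain it, mirroring the worker-pool shape of deleteFiles above.
// It returns how many deletions succeeded.
func deleteAll(ids []string, deleteOne func(string) error) int {
	fileChan := make(chan string)
	go func() {
		for _, id := range ids {
			fileChan <- id
		}
		close(fileChan) // signals the workers to stop
	}()

	var wg sync.WaitGroup
	var mu sync.Mutex
	deleted := 0
	for i := 0; i < numDeleteWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range fileChan {
				if deleteOne(id) == nil {
					mu.Lock()
					deleted++
					mu.Unlock()
				}
			}
		}()
	}
	wg.Wait()
	return deleted
}

func main() {
	ids := []string{"a", "b", "c", "d"}
	n := deleteAll(ids, func(id string) error { return nil })
	fmt.Println(n)
}
```

With errgroup, the first non-nil error returned from a worker cancels the shared context and is surfaced by wg.Wait(), which is what lets DeleteFilesChecked fail fast while DeleteFiles discards the result.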
@@ -180,7 +180,9 @@ func isDirExcludedByFile(dir, tagFilename, header string) bool {
Warnf("could not open exclusion tagfile: %v", err)
return false
}
defer f.Close()
defer func() {
_ = f.Close()
}()
buf := make([]byte, len(header))
_, err = io.ReadFull(f, buf)
// EOF is handled with a dedicated message, otherwise the warning were too cryptic
@@ -199,12 +201,17 @@ func isDirExcludedByFile(dir, tagFilename, header string) bool {
return true
}

// gatherDevices returns the set of unique device ids of the files and/or
// directory paths listed in "items".
func gatherDevices(items []string) (deviceMap map[string]uint64, err error) {
deviceMap = make(map[string]uint64)
for _, item := range items {
item, err = filepath.Abs(filepath.Clean(item))
// DeviceMap is used to track allowed source devices for backup. This is used to
// check for crossing mount points during backup (for --one-file-system). It
// maps the name of a source path to its device ID.
type DeviceMap map[string]uint64

// NewDeviceMap creates a new device map from the list of source paths.
func NewDeviceMap(allowedSourcePaths []string) (DeviceMap, error) {
deviceMap := make(map[string]uint64)

for _, item := range allowedSourcePaths {
item, err := filepath.Abs(filepath.Clean(item))
if err != nil {
return nil, err
}
@@ -213,30 +220,63 @@ func gatherDevices(items []string) (deviceMap map[string]uint64, err error) {
if err != nil {
return nil, err
}

id, err := fs.DeviceID(fi)
if err != nil {
return nil, err
}

deviceMap[item] = id
}

if len(deviceMap) == 0 {
return nil, errors.New("zero allowed devices")
}

return deviceMap, nil
}

// IsAllowed returns true if the path is located on an allowed device.
func (m DeviceMap) IsAllowed(item string, deviceID uint64) (bool, error) {
for dir := item; ; dir = filepath.Dir(dir) {
debug.Log("item %v, test dir %v", item, dir)

// find a parent directory that is on an allowed device (otherwise
// we would not traverse the directory at all)
allowedID, ok := m[dir]
if !ok {
if dir == filepath.Dir(dir) {
// arrived at root, no allowed device found. this should not happen.
break
}
continue
}

// if the item has a different device ID than the parent directory,
// we crossed a file system boundary
if allowedID != deviceID {
debug.Log("item %v (dir %v) on disallowed device %d", item, dir, deviceID)
return false, nil
}

// item is on allowed device, accept it
debug.Log("item %v allowed", item)
return true, nil
}

return false, fmt.Errorf("item %v (device ID %v) not found, deviceMap: %v", item, deviceID, m)
}

// rejectByDevice returns a RejectFunc that rejects files which are on a
// different file systems than the files/dirs in samples.
func rejectByDevice(samples []string) (RejectFunc, error) {
allowed, err := gatherDevices(samples)
deviceMap, err := NewDeviceMap(samples)
if err != nil {
return nil, err
}
debug.Log("allowed devices: %v\n", allowed)
debug.Log("allowed devices: %v\n", deviceMap)

return func(item string, fi os.FileInfo) bool {
item = filepath.Clean(item)

id, err := fs.DeviceID(fi)
if err != nil {
// This should never happen because gatherDevices() would have
@@ -244,26 +284,55 @@ func rejectByDevice(samples []string) (RejectFunc, error) {
panic(err)
}

for dir := item; ; dir = filepath.Dir(dir) {
debug.Log("item %v, test dir %v", item, dir)

allowedID, ok := allowed[dir]
if !ok {
if dir == filepath.Dir(dir) {
break
}
continue
}

if allowedID != id {
debug.Log("path %q on disallowed device %d", item, id)
return true
}
allowed, err := deviceMap.IsAllowed(filepath.Clean(item), id)
if err != nil {
// this should not happen
panic(fmt.Sprintf("error checking device ID of %v: %v", item, err))
}

if allowed {
// accept item
return false
}

panic(fmt.Sprintf("item %v, device id %v not found, allowedDevs: %v", item, id, allowed))
// reject everything except directories
if !fi.IsDir() {
return true
}

// special case: make sure we keep mountpoints (directories which
// contain a mounted file system). Test this by checking if the parent
// directory would be included.
parentDir := filepath.Dir(filepath.Clean(item))

parentFI, err := fs.Lstat(parentDir)
if err != nil {
debug.Log("item %v: error running lstat() on parent directory: %v", item, err)
// if in doubt, reject
return true
}

parentDeviceID, err := fs.DeviceID(parentFI)
if err != nil {
debug.Log("item %v: getting device ID of parent directory: %v", item, err)
// if in doubt, reject
return true
}

parentAllowed, err := deviceMap.IsAllowed(parentDir, parentDeviceID)
if err != nil {
debug.Log("item %v: error checking parent directory: %v", item, err)
// if in doubt, reject
return true
}

if parentAllowed {
// we found a mount point, so accept the directory
return false
}

// reject everything else
return true
}, nil
}


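The core of the new DeviceMap.IsAllowed above is a walk from the item toward the filesystem root until a registered source path is found, followed by a device-ID comparison: a mismatch means a mount point was crossed. A trimmed, self-contained sketch of that walk (debug logging omitted; a simplification of the code in the hunk, not a drop-in replacement):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// DeviceMap maps an allowed source path to its device ID, as used for the
// --one-file-system mount-point check.
type DeviceMap map[string]uint64

// IsAllowed walks from item up to the root until it finds a directory
// registered in the map, then compares device IDs.
func (m DeviceMap) IsAllowed(item string, deviceID uint64) (bool, error) {
	for dir := item; ; dir = filepath.Dir(dir) {
		allowedID, ok := m[dir]
		if !ok {
			if dir == filepath.Dir(dir) {
				// reached the root without finding an allowed device
				break
			}
			continue
		}
		// a differing device ID means a file system boundary was crossed
		return allowedID == deviceID, nil
	}
	return false, fmt.Errorf("item %v not found in device map", item)
}

func main() {
	m := DeviceMap{"/": 1, "/usr/local": 5}
	ok, _ := m.IsAllowed("/usr/share", 1) // same device as "/"
	fmt.Println(ok)
	ok, _ = m.IsAllowed("/usr/local/foobar/submount", 23) // crossed onto device 23
	fmt.Println(ok)
}
```

Note that the nearest registered ancestor wins: "/usr/local" shadows "/" for anything beneath it, which is exactly what the TestDeviceMap cases in the following hunk exercise.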
@@ -318,3 +318,47 @@ func TestIsExcludedByFileSize(t *testing.T) {
}
}
}

func TestDeviceMap(t *testing.T) {
deviceMap := DeviceMap{
filepath.FromSlash("/"): 1,
filepath.FromSlash("/usr/local"): 5,
}

var tests = []struct {
item string
deviceID uint64
allowed bool
}{
{"/root", 1, true},
{"/usr", 1, true},

{"/proc", 2, false},
{"/proc/1234", 2, false},

{"/usr", 3, false},
{"/usr/share", 3, false},

{"/usr/local", 5, true},
{"/usr/local/foobar", 5, true},

{"/usr/local/foobar/submount", 23, false},
{"/usr/local/foobar/submount/file", 23, false},

{"/usr/local/foobar/outhersubmount", 1, false},
{"/usr/local/foobar/outhersubmount/otherfile", 1, false},
}

for _, test := range tests {
t.Run("", func(t *testing.T) {
res, err := deviceMap.IsAllowed(filepath.FromSlash(test.item), test.deviceID)
if err != nil {
t.Fatal(err)
}

if res != test.allowed {
t.Fatalf("wrong result returned by IsAllowed(%v): want %v, got %v", test.item, test.allowed, res)
}
})
}
}

@@ -39,7 +39,7 @@ import (
"golang.org/x/crypto/ssh/terminal"
)

var version = "0.11.0"
var version = "0.12.0"

// TimeFormat is the format used for all timestamps printed by restic.
const TimeFormat = "2006-01-02 15:04:05"
@@ -71,7 +71,7 @@ type GlobalOptions struct {
stdout io.Writer
stderr io.Writer

backendTestHook backendWrapper
backendTestHook, backendInnerTestHook backendWrapper

// verbosity is set as follows:
// 0 means: don't print any messages except errors, this is used when --quiet is specified
@@ -231,6 +231,13 @@ func Verbosef(format string, args ...interface{}) {
}
}

// Verboseff calls Printf to write the message when the verbosity is >= 2
func Verboseff(format string, args ...interface{}) {
if globalOptions.verbosity >= 2 {
Printf(format, args...)
}
}

// PrintProgress wraps fmt.Printf to handle the difference in writing progress
// information to terminals and non-terminal stdout
func PrintProgress(format string, args ...interface{}) {
@@ -688,12 +695,8 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
switch loc.Scheme {
case "local":
be, err = local.Open(globalOptions.ctx, cfg.(local.Config))
// wrap the backend in a LimitBackend so that the throughput is limited
be = limiter.LimitBackend(be, lim)
case "sftp":
be, err = sftp.Open(globalOptions.ctx, cfg.(sftp.Config))
// wrap the backend in a LimitBackend so that the throughput is limited
be = limiter.LimitBackend(be, lim)
case "s3":
be, err = s3.Open(globalOptions.ctx, cfg.(s3.Config), rt)
case "gs":
@@ -717,6 +720,19 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
return nil, errors.Fatalf("unable to open repo at %v: %v", location.StripPassword(s), err)
}

// wrap backend if a test specified an inner hook
if gopts.backendInnerTestHook != nil {
be, err = gopts.backendInnerTestHook(be)
if err != nil {
return nil, err
}
}

if loc.Scheme == "local" || loc.Scheme == "sftp" {
// wrap the backend in a LimitBackend so that the throughput is limited
be = limiter.LimitBackend(be, lim)
}

// check if config is there
fi, err := be.Stat(globalOptions.ctx, restic.Handle{Type: restic.ConfigFile})
if err != nil {

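The new Verboseff helper sits one level above Verbosef in the verbosity ladder described in the diff (0 for --quiet, 1 by default, 2 and up for extra output). A standalone sketch of that gating, with globalOptions reduced to a single variable and the shared check pulled into a helper (an illustrative simplification, not restic's layout):

```go
package main

import "fmt"

// verbosity: 0 = --quiet, 1 = default, 2 and up = extra output.
var verbosity = 1

// shouldPrint is the gate shared by both helpers below.
func shouldPrint(minLevel int) bool {
	return verbosity >= minLevel
}

// Verbosef prints at the default verbosity and above.
func Verbosef(format string, args ...interface{}) {
	if shouldPrint(1) {
		fmt.Printf(format, args...)
	}
}

// Verboseff prints only when extra verbosity (>= 2) was requested.
func Verboseff(format string, args ...interface{}) {
	if shouldPrint(2) {
		fmt.Printf(format, args...)
	}
}

func main() {
	Verbosef("shown at the default verbosity\n")
	Verboseff("suppressed unless verbosity is raised to %d or more\n", 2)
}
```

This is why the rebuild-index hunk above downgrades "skipped incomplete pack file" from Printf to Verboseff: the message now only appears when the user explicitly asks for more output.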
@@ -54,7 +54,7 @@ func TestReadRepo(t *testing.T) {

var opts3 GlobalOptions
opts3.RepositoryFile = foo + "-invalid"
repo, err = ReadRepo(opts3)
_, err = ReadRepo(opts3)
if err == nil {
t.Fatal("must not read repository path from invalid file path")
}

@@ -1,7 +1,4 @@
// +build !netbsd
// +build !openbsd
// +build !solaris
// +build !windows
// +build darwin freebsd linux

package main

@@ -163,9 +160,6 @@ func TestMount(t *testing.T) {
repo, err := OpenRepository(env.gopts)
rtest.OK(t, err)

// We remove the mountpoint now to check that cmdMount creates it
rtest.RemoveAll(t, env.mountpoint)

checkSnapshots(t, env.gopts, repo, env.mountpoint, env.repo, []restic.ID{}, 0)

rtest.SetupTarTestFixture(t, env.testdata, filepath.Join("testdata", "backup-data.tar.gz"))

@@ -166,7 +166,7 @@ func testRunDiffOutput(gopts GlobalOptions, firstSnapshotID string, secondSnapsh
|
||||
ShowMetadata: false,
|
||||
}
|
||||
err := runDiff(opts, gopts, []string{firstSnapshotID, secondSnapshotID})
|
||||
return string(buf.Bytes()), err
|
||||
return buf.String(), err
|
||||
}
|
||||
|
||||
func testRunRebuildIndex(t testing.TB, gopts GlobalOptions) {
|
||||
@@ -175,7 +175,7 @@ func testRunRebuildIndex(t testing.TB, gopts GlobalOptions) {
|
||||
globalOptions.stdout = os.Stdout
|
||||
}()
|
||||
|
||||
rtest.OK(t, runRebuildIndex(gopts))
|
||||
rtest.OK(t, runRebuildIndex(RebuildIndexOptions{}, gopts))
|
||||
}
|
||||
|
||||
func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
|
||||
@@ -270,8 +270,8 @@ func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
|
||||
"Expected 2 snapshots to be removed, got %v", len(forgets[0].Remove))
|
||||
}
|
||||
|
||||
func testRunPrune(t testing.TB, gopts GlobalOptions) {
|
||||
rtest.OK(t, runPrune(gopts))
|
||||
func testRunPrune(t testing.TB, gopts GlobalOptions, opts PruneOptions) {
|
||||
rtest.OK(t, runPrune(opts, gopts))
|
||||
}
|
||||
|
||||
func testSetupBackupData(t testing.TB, env *testEnvironment) string {
|
||||
@@ -566,9 +566,9 @@ func TestBackupErrors(t *testing.T) {
|
||||
|
||||
// Assume failure
|
||||
inaccessibleFile := filepath.Join(env.testdata, "0", "0", "9", "0")
|
||||
os.Chmod(inaccessibleFile, 0000)
|
||||
rtest.OK(t, os.Chmod(inaccessibleFile, 0000))
|
||||
defer func() {
|
||||
os.Chmod(inaccessibleFile, 0644)
|
||||
rtest.OK(t, os.Chmod(inaccessibleFile, 0644))
|
||||
}()
|
||||
opts := BackupOptions{}
|
||||
gopts := env.gopts
|
||||
@@ -657,16 +657,24 @@ func TestBackupTags(t *testing.T) {
|
||||
testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
|
||||
testRunCheck(t, env.gopts)
|
||||
newest, _ := testRunSnapshots(t, env.gopts)
|
||||
rtest.Assert(t, newest != nil, "expected a new backup, got nil")
|
||||
|
||||
if newest == nil {
|
||||
t.Fatal("expected a backup, got nil")
|
||||
}
|
||||
|
||||
rtest.Assert(t, len(newest.Tags) == 0,
|
||||
"expected no tags, got %v", newest.Tags)
|
||||
parent := newest
|
||||
|
||||
opts.Tags = []string{"NL"}
|
||||
opts.Tags = restic.TagLists{[]string{"NL"}}
|
||||
testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
|
||||
testRunCheck(t, env.gopts)
|
||||
newest, _ = testRunSnapshots(t, env.gopts)
|
||||
rtest.Assert(t, newest != nil, "expected a new backup, got nil")
|
||||
|
||||
if newest == nil {
|
||||
t.Fatal("expected a backup, got nil")
|
||||
}
|
||||
|
||||
rtest.Assert(t, len(newest.Tags) == 1 && newest.Tags[0] == "NL",
|
||||
"expected one NL tag, got %v", newest.Tags)
|
||||
// Tagged backup should have untagged backup as parent.
|
||||
@@ -833,48 +841,59 @@ func TestTag(t *testing.T) {
|
||||
testRunBackup(t, "", []string{env.testdata}, BackupOptions{}, env.gopts)
|
||||
testRunCheck(t, env.gopts)
|
||||
newest, _ := testRunSnapshots(t, env.gopts)
|
||||
rtest.Assert(t, newest != nil, "expected a new backup, got nil")
|
||||
if newest == nil {
|
||||
t.Fatal("expected a new backup, got nil")
|
||||
}
|
||||
|
||||
rtest.Assert(t, len(newest.Tags) == 0,
|
||||
"expected no tags, got %v", newest.Tags)
|
||||
rtest.Assert(t, newest.Original == nil,
|
||||
"expected original ID to be nil, got %v", newest.Original)
|
||||
originalID := *newest.ID
|
||||
|
||||
testRunTag(t, TagOptions{SetTags: []string{"NL"}}, env.gopts)
|
||||
testRunTag(t, TagOptions{SetTags: restic.TagLists{[]string{"NL"}}}, env.gopts)
|
||||
testRunCheck(t, env.gopts)
|
||||
newest, _ = testRunSnapshots(t, env.gopts)
|
||||
rtest.Assert(t, newest != nil, "expected a new backup, got nil")
|
||||
if newest == nil {
|
||||
t.Fatal("expected a backup, got nil")
|
||||
}
|
||||
rtest.Assert(t, len(newest.Tags) == 1 && newest.Tags[0] == "NL",
|
||||
"set failed, expected one NL tag, got %v", newest.Tags)
|
||||
rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
|
||||
rtest.Assert(t, *newest.Original == originalID,
|
||||
"expected original ID to be set to the first snapshot id")
|
||||
|
||||
-	testRunTag(t, TagOptions{AddTags: []string{"CH"}}, env.gopts)
+	testRunTag(t, TagOptions{AddTags: restic.TagLists{[]string{"CH"}}}, env.gopts)
 	testRunCheck(t, env.gopts)
 	newest, _ = testRunSnapshots(t, env.gopts)
-	rtest.Assert(t, newest != nil, "expected a new backup, got nil")
+	if newest == nil {
+		t.Fatal("expected a backup, got nil")
+	}
 	rtest.Assert(t, len(newest.Tags) == 2 && newest.Tags[0] == "NL" && newest.Tags[1] == "CH",
 		"add failed, expected CH,NL tags, got %v", newest.Tags)
 	rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
 	rtest.Assert(t, *newest.Original == originalID,
 		"expected original ID to be set to the first snapshot id")

-	testRunTag(t, TagOptions{RemoveTags: []string{"NL"}}, env.gopts)
+	testRunTag(t, TagOptions{RemoveTags: restic.TagLists{[]string{"NL"}}}, env.gopts)
 	testRunCheck(t, env.gopts)
 	newest, _ = testRunSnapshots(t, env.gopts)
-	rtest.Assert(t, newest != nil, "expected a new backup, got nil")
+	if newest == nil {
+		t.Fatal("expected a backup, got nil")
+	}
 	rtest.Assert(t, len(newest.Tags) == 1 && newest.Tags[0] == "CH",
 		"remove failed, expected one CH tag, got %v", newest.Tags)
 	rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
 	rtest.Assert(t, *newest.Original == originalID,
 		"expected original ID to be set to the first snapshot id")

-	testRunTag(t, TagOptions{AddTags: []string{"US", "RU"}}, env.gopts)
-	testRunTag(t, TagOptions{RemoveTags: []string{"CH", "US", "RU"}}, env.gopts)
+	testRunTag(t, TagOptions{AddTags: restic.TagLists{[]string{"US", "RU"}}}, env.gopts)
+	testRunTag(t, TagOptions{RemoveTags: restic.TagLists{[]string{"CH", "US", "RU"}}}, env.gopts)
 	testRunCheck(t, env.gopts)
 	newest, _ = testRunSnapshots(t, env.gopts)
-	rtest.Assert(t, newest != nil, "expected a new backup, got nil")
+	if newest == nil {
+		t.Fatal("expected a backup, got nil")
+	}
 	rtest.Assert(t, len(newest.Tags) == 0,
 		"expected no tags, got %v", newest.Tags)
 	rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
@@ -882,10 +901,12 @@ func TestTag(t *testing.T) {
 		"expected original ID to be set to the first snapshot id")

 	// Check special case of removing all tags.
-	testRunTag(t, TagOptions{SetTags: []string{""}}, env.gopts)
+	testRunTag(t, TagOptions{SetTags: restic.TagLists{[]string{""}}}, env.gopts)
 	testRunCheck(t, env.gopts)
 	newest, _ = testRunSnapshots(t, env.gopts)
-	rtest.Assert(t, newest != nil, "expected a new backup, got nil")
+	if newest == nil {
+		t.Fatal("expected a backup, got nil")
+	}
 	rtest.Assert(t, len(newest.Tags) == 0,
 		"expected no tags, got %v", newest.Tags)
 	rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
@@ -933,7 +954,7 @@ func testRunKeyAddNewKeyUserHost(t testing.TB, gopts GlobalOptions) {
 		keyHostname = ""
 	}()

-	cmdKey.Flags().Parse([]string{"--user=john", "--host=example.com"})
+	rtest.OK(t, cmdKey.Flags().Parse([]string{"--user=john", "--host=example.com"}))

 	t.Log("adding key for john@example.com")
 	rtest.OK(t, runKey(gopts, []string{"add"}))
@@ -1106,7 +1127,7 @@ func TestRestoreLatest(t *testing.T) {
 	testRunBackup(t, "", []string{filepath.Base(env.testdata)}, opts, env.gopts)
 	testRunCheck(t, env.gopts)

-	os.Remove(p)
+	rtest.OK(t, os.Remove(p))
 	rtest.OK(t, appendRandomData(p, 101))
 	testRunBackup(t, "", []string{filepath.Base(env.testdata)}, opts, env.gopts)
 	testRunCheck(t, env.gopts)
@@ -1351,7 +1372,7 @@ func TestRebuildIndexFailsOnAppendOnly(t *testing.T) {
 	env.gopts.backendTestHook = func(r restic.Backend) (restic.Backend, error) {
 		return &appendOnlyBackend{r}, nil
 	}
-	err := runRebuildIndex(env.gopts)
+	err := runRebuildIndex(RebuildIndexOptions{}, env.gopts)
 	if err == nil {
 		t.Error("expected rebuildIndex to fail")
 	}
@@ -1386,6 +1407,32 @@ func TestCheckRestoreNoLock(t *testing.T) {
 	}

+func TestPrune(t *testing.T) {
+	t.Run("0", func(t *testing.T) {
+		opts := PruneOptions{MaxUnused: "0%"}
+		checkOpts := CheckOptions{ReadData: true, CheckUnused: true}
+		testPrune(t, opts, checkOpts)
+	})
+
+	t.Run("50", func(t *testing.T) {
+		opts := PruneOptions{MaxUnused: "50%"}
+		checkOpts := CheckOptions{ReadData: true}
+		testPrune(t, opts, checkOpts)
+	})
+
+	t.Run("unlimited", func(t *testing.T) {
+		opts := PruneOptions{MaxUnused: "unlimited"}
+		checkOpts := CheckOptions{ReadData: true}
+		testPrune(t, opts, checkOpts)
+	})
+
+	t.Run("CachableOnly", func(t *testing.T) {
+		opts := PruneOptions{MaxUnused: "5%", RepackCachableOnly: true}
+		checkOpts := CheckOptions{ReadData: true}
+		testPrune(t, opts, checkOpts)
+	})
+}
+
+func testPrune(t *testing.T, pruneOpts PruneOptions, checkOpts CheckOptions) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

@@ -1406,10 +1453,12 @@ func TestPrune(t *testing.T) {

 	testRunForgetJSON(t, env.gopts)
 	testRunForget(t, env.gopts, firstSnapshot[0].String())
-	testRunPrune(t, env.gopts)
-	testRunCheck(t, env.gopts)
+	testRunPrune(t, env.gopts, pruneOpts)
+	rtest.OK(t, runCheck(checkOpts, env.gopts, nil))
 }

+var pruneDefaultOptions = PruneOptions{MaxUnused: "5%"}
+
 func listPacks(gopts GlobalOptions, t *testing.T) restic.IDSet {
 	r, err := OpenRepository(gopts)
 	rtest.OK(t, err)
@@ -1452,14 +1501,8 @@ func TestPruneWithDamagedRepository(t *testing.T) {
 		"expected one snapshot, got %v", snapshotIDs)

 	// prune should fail
-	err := runPrune(env.gopts)
-	if err == nil {
-		t.Fatalf("expected prune to fail")
-	}
-	if !strings.Contains(err.Error(), "blobs seem to be missing") {
-		t.Fatalf("did not find hint for missing blobs")
-	}
-	t.Log(err)
+	rtest.Assert(t, runPrune(pruneDefaultOptions, env.gopts) == errorPacksMissing,
+		"prune should have reported index not complete error")
 }

 // Test repos for edge cases
@@ -1469,37 +1512,43 @@ func TestEdgeCaseRepos(t *testing.T) {
 	// repo where index is completely missing
 	// => check and prune should fail
 	t.Run("no-index", func(t *testing.T) {
-		testEdgeCaseRepo(t, "repo-index-missing.tar.gz", opts, false, false)
+		testEdgeCaseRepo(t, "repo-index-missing.tar.gz", opts, pruneDefaultOptions, false, false)
 	})

 	// repo where an existing and used blob is missing from the index
-	// => check should fail, prune should heal this
+	// => check and prune should fail
 	t.Run("index-missing-blob", func(t *testing.T) {
-		testEdgeCaseRepo(t, "repo-index-missing-blob.tar.gz", opts, false, true)
+		testEdgeCaseRepo(t, "repo-index-missing-blob.tar.gz", opts, pruneDefaultOptions, false, false)
 	})

 	// repo where a blob is missing
 	// => check and prune should fail
-	t.Run("no-data", func(t *testing.T) {
-		testEdgeCaseRepo(t, "repo-data-missing.tar.gz", opts, false, false)
+	t.Run("missing-data", func(t *testing.T) {
+		testEdgeCaseRepo(t, "repo-data-missing.tar.gz", opts, pruneDefaultOptions, false, false)
 	})

+	// repo where blobs which are not needed are missing or in invalid pack files
+	// => check should fail and prune should repair this
+	t.Run("missing-unused-data", func(t *testing.T) {
+		testEdgeCaseRepo(t, "repo-unused-data-missing.tar.gz", opts, pruneDefaultOptions, false, true)
+	})
+
 	// repo where data exists that is not referenced
 	// => check and prune should fully work
 	t.Run("unreferenced-data", func(t *testing.T) {
-		testEdgeCaseRepo(t, "repo-unreferenced-data.tar.gz", opts, true, true)
+		testEdgeCaseRepo(t, "repo-unreferenced-data.tar.gz", opts, pruneDefaultOptions, true, true)
 	})

 	// repo where an obsolete index still exists
 	// => check and prune should fully work
 	t.Run("obsolete-index", func(t *testing.T) {
-		testEdgeCaseRepo(t, "repo-obsolete-index.tar.gz", opts, true, true)
+		testEdgeCaseRepo(t, "repo-obsolete-index.tar.gz", opts, pruneDefaultOptions, true, true)
 	})

 	// repo which contains mixed (data/tree) packs
 	// => check and prune should fully work
 	t.Run("mixed-packs", func(t *testing.T) {
-		testEdgeCaseRepo(t, "repo-mixed.tar.gz", opts, true, true)
+		testEdgeCaseRepo(t, "repo-mixed.tar.gz", opts, pruneDefaultOptions, true, true)
 	})

 	// repo which contains duplicate blobs
@@ -1510,11 +1559,11 @@ func TestEdgeCaseRepos(t *testing.T) {
 		CheckUnused: true,
 	}
 	t.Run("duplicates", func(t *testing.T) {
-		testEdgeCaseRepo(t, "repo-duplicates.tar.gz", opts, false, true)
+		testEdgeCaseRepo(t, "repo-duplicates.tar.gz", opts, pruneDefaultOptions, false, true)
 	})
 }

-func testEdgeCaseRepo(t *testing.T, tarfile string, options CheckOptions, checkOK, pruneOK bool) {
+func testEdgeCaseRepo(t *testing.T, tarfile string, optionsCheck CheckOptions, optionsPrune PruneOptions, checkOK, pruneOK bool) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

@@ -1524,19 +1573,78 @@ func testEdgeCaseRepo(t *testing.T, tarfile string, options CheckOptions, checkO
 	if checkOK {
 		testRunCheck(t, env.gopts)
 	} else {
-		rtest.Assert(t, runCheck(options, env.gopts, nil) != nil,
+		rtest.Assert(t, runCheck(optionsCheck, env.gopts, nil) != nil,
 			"check should have reported an error")
 	}

 	if pruneOK {
-		testRunPrune(t, env.gopts)
+		testRunPrune(t, env.gopts, optionsPrune)
 		testRunCheck(t, env.gopts)
 	} else {
-		rtest.Assert(t, runPrune(env.gopts) != nil,
+		rtest.Assert(t, runPrune(optionsPrune, env.gopts) != nil,
 			"prune should have reported an error")
 	}
 }

+// a listOnceBackend only allows listing once per filetype
+// listing filetypes more than once may cause problems with eventually consistent
+// backends (like e.g. AWS S3) as the second listing may be inconsistent to what
+// is expected by the first listing + some operations.
+type listOnceBackend struct {
+	restic.Backend
+	listedFileType map[restic.FileType]bool
+}
+
+func newListOnceBackend(be restic.Backend) *listOnceBackend {
+	return &listOnceBackend{
+		Backend:        be,
+		listedFileType: make(map[restic.FileType]bool),
+	}
+}
+
+func (be *listOnceBackend) List(ctx context.Context, t restic.FileType, fn func(restic.FileInfo) error) error {
+	if t != restic.LockFile && be.listedFileType[t] {
+		return errors.Errorf("tried listing type %v the second time", t)
+	}
+	be.listedFileType[t] = true
+	return be.Backend.List(ctx, t, fn)
+}
+
+func TestListOnce(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+
+	env.gopts.backendTestHook = func(r restic.Backend) (restic.Backend, error) {
+		return newListOnceBackend(r), nil
+	}
+
+	pruneOpts := PruneOptions{MaxUnused: "0"}
+	checkOpts := CheckOptions{ReadData: true, CheckUnused: true}
+
+	testSetupBackupData(t, env)
+	opts := BackupOptions{}
+
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
+	firstSnapshot := testRunList(t, "snapshots", env.gopts)
+	rtest.Assert(t, len(firstSnapshot) == 1,
+		"expected one snapshot, got %v", firstSnapshot)
+
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "2")}, opts, env.gopts)
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "3")}, opts, env.gopts)
+
+	snapshotIDs := testRunList(t, "snapshots", env.gopts)
+	rtest.Assert(t, len(snapshotIDs) == 3,
+		"expected 3 snapshot, got %v", snapshotIDs)
+
+	testRunForgetJSON(t, env.gopts)
+	testRunForget(t, env.gopts, firstSnapshot[0].String())
+	testRunPrune(t, env.gopts, pruneOpts)
+	rtest.OK(t, runCheck(checkOpts, env.gopts, nil))
+
+	rtest.OK(t, runRebuildIndex(RebuildIndexOptions{}, env.gopts))
+	rtest.OK(t, runRebuildIndex(RebuildIndexOptions{ReadAllPacks: true}, env.gopts))
+}

 func TestHardLink(t *testing.T) {
 	// this test assumes a test set with a single directory containing hard linked files
 	env, cleanup := withTestEnvironment(t)
@@ -1660,16 +1768,35 @@ func copyFile(dst string, src string) error {
 	if err != nil {
 		return err
 	}
-	defer srcFile.Close()

 	dstFile, err := os.Create(dst)
 	if err != nil {
+		// ignore subsequent errors
+		_ = srcFile.Close()
 		return err
 	}
-	defer dstFile.Close()

 	_, err = io.Copy(dstFile, srcFile)
-	return err
+	if err != nil {
+		// ignore subsequent errors
+		_ = srcFile.Close()
+		_ = dstFile.Close()
+		return err
+	}
+
+	err = srcFile.Close()
+	if err != nil {
+		// ignore subsequent errors
+		_ = dstFile.Close()
+		return err
+	}
+
+	err = dstFile.Close()
+	if err != nil {
+		return err
+	}
+
+	return nil
 }

 var diffOutputRegexPatterns = []string{
@@ -1742,3 +1869,53 @@ func TestDiff(t *testing.T) {
 		rtest.Assert(t, r.MatchString(out), "expected pattern %v in output, got\n%v", pattern, out)
 	}
 }
+
+type writeToOnly struct {
+	rd io.Reader
+}
+
+func (r *writeToOnly) Read(p []byte) (n int, err error) {
+	return 0, fmt.Errorf("should have called WriteTo instead")
+}
+
+func (r *writeToOnly) WriteTo(w io.Writer) (int64, error) {
+	return io.Copy(w, r.rd)
+}
+
+type onlyLoadWithWriteToBackend struct {
+	restic.Backend
+}
+
+func (be *onlyLoadWithWriteToBackend) Load(ctx context.Context, h restic.Handle,
+	length int, offset int64, fn func(rd io.Reader) error) error {
+
+	return be.Backend.Load(ctx, h, length, offset, func(rd io.Reader) error {
+		return fn(&writeToOnly{rd: rd})
+	})
+}
+
+func TestBackendLoadWriteTo(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+
+	// setup backend which only works if its WriteTo method is correctly propagated upwards
+	env.gopts.backendInnerTestHook = func(r restic.Backend) (restic.Backend, error) {
+		return &onlyLoadWithWriteToBackend{Backend: r}, nil
+	}
+
+	testSetupBackupData(t, env)
+
+	// add some data, but make sure that it isn't cached during upload
+	opts := BackupOptions{}
+	env.gopts.NoCache = true
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
+
+	// loading snapshots must still work
+	env.gopts.NoCache = false
+	firstSnapshot := testRunList(t, "snapshots", env.gopts)
+	rtest.Assert(t, len(firstSnapshot) == 1,
+		"expected one snapshot, got %v", firstSnapshot)
+
+	// test readData using the hashing.Reader
+	testRunCheck(t, env.gopts)
+}

@@ -85,9 +85,9 @@ func refreshLocks(wg *sync.WaitGroup, done <-chan struct{}) {
 	}
 }

-func unlockRepo(lock *restic.Lock) error {
+func unlockRepo(lock *restic.Lock) {
 	if lock == nil {
-		return nil
+		return
 	}

 	globalLocks.Lock()
@@ -99,18 +99,17 @@ func unlockRepo(lock *restic.Lock) error {
 			debug.Log("unlocking repository with lock %v", lock)
 			if err := lock.Unlock(); err != nil {
 				debug.Log("error while unlocking: %v", err)
-				return err
+				Warnf("error while unlocking: %v", err)
+				return
 			}

 			// remove the lock from the list of locks
 			globalLocks.locks = append(globalLocks.locks[:i], globalLocks.locks[i+1:]...)
-			return nil
+			return
 		}
 	}

 	debug.Log("unable to find lock %v in the global list of locks, ignoring", lock)
-
-	return nil
 }

 func unlockAll() error {

@@ -32,7 +32,7 @@ directories in an encrypted repository stored on different backends.
 	PersistentPreRunE: func(c *cobra.Command, args []string) error {
 		// set verbosity, default is one
 		globalOptions.verbosity = 1
-		if globalOptions.Quiet && (globalOptions.Verbose > 1) {
+		if globalOptions.Quiet && globalOptions.Verbose > 0 {
 			return errors.Fatal("--quiet and --verbose cannot be specified at the same time")
 		}

@@ -2,35 +2,53 @@ package main

 import (
 	"fmt"
+	"os"
+	"strconv"
 	"time"

-	"github.com/restic/restic/internal/restic"
+	"github.com/restic/restic/internal/ui/progress"
 )

-// newProgressMax returns a progress that counts blobs.
-func newProgressMax(show bool, max uint64, description string) *restic.Progress {
+// calculateProgressInterval returns the interval configured via RESTIC_PROGRESS_FPS
+// or if unset returns an interval for 60fps on interactive terminals and 0 (=disabled)
+// for non-interactive terminals
+func calculateProgressInterval() time.Duration {
+	interval := time.Second / 60
+	fps, err := strconv.ParseFloat(os.Getenv("RESTIC_PROGRESS_FPS"), 64)
+	if err == nil && fps > 0 {
+		if fps > 60 {
+			fps = 60
+		}
+		interval = time.Duration(float64(time.Second) / fps)
+	} else if !stdoutIsTerminal() {
+		interval = 0
+	}
+	return interval
+}
+
+// newProgressMax returns a progress.Counter that prints to stdout.
+func newProgressMax(show bool, max uint64, description string) *progress.Counter {
 	if !show {
 		return nil
 	}
+	interval := calculateProgressInterval()

-	p := restic.NewProgress()
-
-	p.OnUpdate = func(s restic.Stat, d time.Duration, ticker bool) {
-		status := fmt.Sprintf("[%s] %s %d / %d %s",
-			formatDuration(d),
-			formatPercent(s.Blobs, max),
-			s.Blobs, max, description)
+	return progress.New(interval, max, func(v uint64, max uint64, d time.Duration, final bool) {
+		var status string
+		if max == 0 {
+			status = fmt.Sprintf("[%s] %d %s", formatDuration(d), v, description)
+		} else {
+			status = fmt.Sprintf("[%s] %s %d / %d %s",
+				formatDuration(d), formatPercent(v, max), v, max, description)
+		}

 		if w := stdoutTerminalWidth(); w > 0 {
 			status = shortenStatus(w, status)
 		}

 		PrintProgress("%s", status)
-	}
-
-	p.OnDone = func(s restic.Stat, d time.Duration, ticker bool) {
-		fmt.Printf("\n")
-	}
-
-	return p
+		if final {
+			fmt.Print("\n")
+		}
+	})
+}

BIN  cmd/restic/testdata/repo-unused-data-missing.tar.gz  (vendored, new file; binary file not shown)
@@ -238,6 +238,20 @@ after the bucket name like this:
 For an S3-compatible server that is not Amazon (like Minio, see below),
 or is only available via HTTP, you can specify the URL to the server
 like this: ``s3:http://server:port/bucket_name``.

+.. note:: restic expects `path-style URLs <https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro>`__
+   like for example ``s3.us-west-2.amazonaws.com/bucket_name``.
+   Virtual-hosted-style URLs like ``bucket_name.s3.us-west-2.amazonaws.com``,
+   where the bucket name is part of the hostname, are not supported. These must
+   be converted to path-style URLs instead, for example ``s3.us-west-2.amazonaws.com/bucket_name``.
+
+.. note:: Certain S3-compatible servers do not properly implement the
+   ``ListObjectsV2`` API, most notably Ceph versions before v14.2.5. On these
+   backends, as a temporary workaround, you can provide the
+   ``-o s3.list-objects-v1=true`` option to use the older
+   ``ListObjects`` API instead. This option may be removed in future
+   versions of restic.
+

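The path-style conversion described in the note above can be sketched as a small Go helper. This is purely illustrative: `toPathStyle` is not part of restic, and it only handles the common `bucket.s3.<region>.amazonaws.com` shape.

```go
package main

import (
	"fmt"
	"strings"
)

// toPathStyle rewrites a virtual-hosted-style S3 host such as
// "bucket.s3.us-west-2.amazonaws.com" into the path-style form
// "s3.us-west-2.amazonaws.com/bucket". Hosts that do not look
// virtual-hosted are returned unchanged.
func toPathStyle(host string) string {
	bucket, rest, found := strings.Cut(host, ".")
	if !found || !strings.HasPrefix(rest, "s3.") {
		return host // already path-style, or not an S3 endpoint
	}
	return rest + "/" + bucket
}

func main() {
	fmt.Println(toPathStyle("bucket_name.s3.us-west-2.amazonaws.com"))
	// prints the path-style form restic accepts
}
```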
 Minio Server
 ************
@@ -299,6 +313,46 @@ this command.
 Please note that knowledge of your password is required to access
 the repository. Losing your password means that your data is irrecoverably lost.

+Alibaba Cloud (Aliyun) Object Storage System (OSS)
+**************************************************
+
+`Alibaba OSS <https://www.alibabacloud.com/product/oss/>`__ is an
+encrypted, secure, cost-effective, and easy-to-use object storage
+service that enables you to store, back up, and archive large amounts
+of data in the cloud.
+
+Alibaba OSS is S3 compatible so it can be used as a storage provider
+for a restic repository with a couple of extra parameters.
+
+- Determine the correct `Alibaba OSS region endpoint <https://www.alibabacloud.com/help/doc-detail/31837.htm>`__ - this will be something like ``oss-eu-west-1.aliyuncs.com``
+- You'll need the region name too - this will be something like ``oss-eu-west-1``
+
+You must first set up the following environment variables with the
+credentials of your Alibaba OSS account.
+
+.. code-block:: console
+
+   $ export AWS_ACCESS_KEY_ID=<YOUR-OSS-ACCESS-KEY-ID>
+   $ export AWS_SECRET_ACCESS_KEY=<YOUR-OSS-SECRET-ACCESS-KEY>
+
+Now you can easily initialize restic to use Alibaba OSS as a backend with
+this command.
+
+.. code-block:: console
+
+   $ ./restic -o s3.bucket-lookup=dns -o s3.region=<OSS-REGION> -r s3:https://<OSS-ENDPOINT>/<OSS-BUCKET-NAME> init
+   enter password for new backend:
+   enter password again:
+   created restic backend xxxxxxxxxx at s3:https://<OSS-ENDPOINT>/<OSS-BUCKET-NAME>
+
+Please note that knowledge of your password is required to access
+the repository. Losing your password means that your data is irrecoverably lost.
+
+For example with an actual endpoint:
+
+.. code-block:: console
+
+   $ restic -o s3.bucket-lookup=dns -o s3.region=oss-eu-west-1 -r s3:https://oss-eu-west-1.aliyuncs.com/bucketname init
+
 OpenStack Swift
 ***************

@@ -326,10 +380,14 @@ the naming convention of those variables follows the official Python Swift client
    $ export OS_AUTH_URL=<MY_AUTH_URL>
    $ export OS_REGION_NAME=<MY_REGION_NAME>
    $ export OS_USERNAME=<MY_USERNAME>
+   $ export OS_USER_ID=<MY_USER_ID>
    $ export OS_PASSWORD=<MY_PASSWORD>
    $ export OS_USER_DOMAIN_NAME=<MY_DOMAIN_NAME>
+   $ export OS_USER_DOMAIN_ID=<MY_DOMAIN_ID>
    $ export OS_PROJECT_NAME=<MY_PROJECT_NAME>
    $ export OS_PROJECT_DOMAIN_NAME=<MY_PROJECT_DOMAIN_NAME>
+   $ export OS_PROJECT_DOMAIN_ID=<MY_PROJECT_DOMAIN_ID>
+   $ export OS_TRUST_ID=<MY_TRUST_ID>

    # For keystone v3 application credential authentication (application credential id)
    $ export OS_AUTH_URL=<MY_AUTH_URL>
@@ -552,10 +610,9 @@ For debugging rclone, you can set the environment variable ``RCLONE_VERBOSE=2``.
 The rclone backend has two additional options:

 * ``-o rclone.program`` specifies the path to rclone, the default value is just ``rclone``
-* ``-o rclone.args`` allows setting the arguments passed to rclone, by default this is ``serve restic --stdio --b2-hard-delete --drive-use-trash=false``
+* ``-o rclone.args`` allows setting the arguments passed to rclone, by default this is ``serve restic --stdio --b2-hard-delete``

-The reason for the two last parameters (``--b2-hard-delete`` and
-``--drive-use-trash=false``) can be found in the corresponding GitHub `issue #1657`_.
+The reason for the ``--b2-hard-delete`` parameter can be found in the corresponding GitHub `issue #1657`_.

 In order to start rclone, restic will build a list of arguments by joining the
 following lists (in this order): ``rclone.program``, ``rclone.args`` and as the
@@ -581,7 +638,17 @@ rclone e.g. via SSH on a server, for example:

 .. code-block:: console

-   $ restic -o rclone.program="ssh user@host rclone" -r rclone:b2:foo/bar
+   $ restic -o rclone.program="ssh user@remotehost rclone" -r rclone:b2:foo/bar
+
+With these options, restic works with local files. It uses rclone and
+credentials stored on ``remotehost`` to communicate with B2. All data (except
+credentials) is encrypted/decrypted locally, then sent/received via
+``remotehost`` to/from B2.
+
+A more advanced version of this setup forbids specific hosts from removing
+files in a repository. See the `blog post by Simon Ruderich
+<https://ruderich.org/simon/notes/append-only-backups-with-restic-and-rclone>`_
+for details.

 The rclone command may also be hard-coded in the SSH configuration or the
 user's public key, in this case it may be sufficient to just start the SSH

@@ -131,24 +131,62 @@ restic encounters:
 In fact several hosts may use the same repository to backup directories
 and files leading to a greater de-duplication.

-Please be aware that when you backup different directories (or the
-directories to be saved have a variable name component like a
-time/date), restic always needs to read all files and only afterwards
-can compute which parts of the files need to be saved. When you backup
-the same directory again (maybe with new or changed files) restic will
-find the old snapshot in the repo and by default only reads those files
-that are new or have been modified since the last snapshot. This is
-decided based on the following attributes of the file in the file system:
-
-* Type (file, symlink, or directory?)
-* Modification time
-* Size
-* Inode number (internal number used to reference a file in a file system)
-
 Now is a good time to run ``restic check`` to verify that all data
 is properly stored in the repository. You should run this command regularly
 to make sure the internal structure of the repository is free of errors.

+File change detection
+*********************
+
+When restic encounters a file that has already been backed up, whether in the
+current backup or a previous one, it makes sure the file's contents are only
+stored once in the repository. To do so, it normally has to scan the entire
+contents of every file. Because this can be very expensive, restic also uses a
+change detection rule based on file metadata to determine whether a file is
+likely unchanged since a previous backup. If it is, the file is not scanned
+again.
+
+Change detection is only performed for regular files (not special files,
+symlinks or directories) that have the exact same path as they did in a
+previous backup of the same location. If a file or one of its containing
+directories was renamed, it is considered a different file and its entire
+contents will be scanned again.
+
+Metadata changes (permissions, ownership, etc.) are always included in the
+backup, even if file contents are considered unchanged.
+
+On **Unix** (including Linux and Mac), given that a file lives at the same
+location as a file in a previous backup, the following file metadata
+attributes have to match for its contents to be presumed unchanged:
+
+* Modification timestamp (mtime).
+* Metadata change timestamp (ctime).
+* File size.
+* Inode number (internal number used to reference a file in a filesystem).
+
+The reason for requiring both mtime and ctime to match is that Unix programs
+can freely change mtime (and some do). In such cases, a ctime change may be
+the only hint that a file did change.
+
+The following ``restic backup`` command line flags modify the change detection
+rules:
+
+* ``--force``: turn off change detection and rescan all files.
+* ``--ignore-ctime``: require mtime to match, but allow ctime to differ.
+* ``--ignore-inode``: require mtime to match, but allow inode number
+  and ctime to differ.
+
+The option ``--ignore-inode`` exists to support FUSE-based filesystems and
+pCloud, which do not assign stable inodes to files.
+
+Note that the device id of the containing mount point is never taken into
+account. Device numbers are not stable for removable devices and ZFS snapshots.
+If you want to force a re-scan in such a case, you can change the mountpoint.
+
+On **Windows**, a file is considered unchanged when its path and modification
+time match, and only ``--force`` has any effect. The other options are
+recognized but ignored.
+
 Excluding Files
 ***************

@@ -276,36 +314,56 @@ suffix the size value with one of ``k``/``K`` for kilobytes, ``m``/``M`` for meg
 Including Files
 ***************

-By using the ``--files-from`` option you can read the files you want to back
-up from one or more files. This is especially useful if a lot of files have
-to be backed up that are not in the same folder or are maybe pre-filtered by
-other software.
+The options ``--files-from``, ``--files-from-verbatim`` and ``--files-from-raw``
+allow you to list files that should be backed up in a file, rather than on the
+command line. This is useful when a lot of files have to be backed up that are
+not in the same folder.

-For example maybe you want to backup files which have a name that matches a
-certain pattern:
+The argument passed to ``--files-from`` must be the name of a text file that
+contains one pattern per line. The file must be encoded as UTF-8, or UTF-16
+with a byte-order mark. Leading and trailing whitespace is removed from the
+patterns. Empty lines and lines starting with a ``#`` are ignored.
+The patterns are expanded, when the file is read, by the Go function
+`filepath.Glob <https://golang.org/pkg/path/filepath/#Glob>`__.
+
+The option ``--files-from-verbatim`` has the same behavior as ``--files-from``,
+except that it contains literal filenames. It does not expand patterns; filenames
+are listed verbatim. Lines starting with a ``#`` are not ignored; leading and
+trailing whitespace is not trimmed off. Empty lines are still allowed, so that
+files can be grouped.
+
+``--files-from-raw`` is a third variant that requires filenames to be terminated
+by a zero byte (the NUL character), so that it can even handle filenames that
+contain newlines or are not encoded as UTF-8 (except on Windows, where the
+listed filenames must still be encoded in UTF-8).
+
+This option is the safest choice when generating filename lists from a script.
+Its file format is the output format generated by GNU find's ``-print0`` option.
+
+All three options interpret the argument ``-`` as standard input.
+
+In all cases, paths may be absolute or relative to ``restic backup``'s
+working directory.
+
+For example, maybe you want to backup files which have a name that matches a
+certain regular expression pattern (uses GNU find):

 .. code-block:: console

-   $ find /tmp/somefiles | grep 'PATTERN' > /tmp/files_to_backup
+   $ find /tmp/somefiles -regex PATTERN -print0 > /tmp/files_to_backup

 You can then use restic to backup the filtered files:

 .. code-block:: console

-   $ restic -r /srv/restic-repo backup --files-from /tmp/files_to_backup
+   $ restic -r /srv/restic-repo backup --files-from-raw /tmp/files_to_backup

-Incidentally you can also combine ``--files-from`` with the normal files
-args:
+You can combine all three options with each other and with the normal file arguments:

 .. code-block:: console

-   $ restic -r /srv/restic-repo backup --files-from /tmp/files_to_backup /tmp/some_additional_file
-
-Paths in the listing file can be absolute or relative. Please note that
-patterns listed in a ``--files-from`` file are treated the same way as
-exclude patterns are, which means that beginning and trailing spaces are
-trimmed and special characters must be escaped. See the documentation
-above for more information.
+   $ restic backup --files-from /tmp/files_to_backup /tmp/some_additional_file
+   $ restic backup --files-from /tmp/glob-pattern --files-from-raw /tmp/generated-list /tmp/some_additional_file

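The ``--files-from-raw`` listing format described above (NUL-terminated names, as produced by ``find -print0``) can be parsed in a few lines of Go. This is an illustrative sketch, not restic's parser:

```go
package main

import (
	"bytes"
	"fmt"
)

// parseRawList splits a --files-from-raw style listing: each filename is
// terminated by a zero byte, so names may contain newlines or arbitrary
// non-NUL bytes.
func parseRawList(data []byte) []string {
	var names []string
	for _, f := range bytes.Split(data, []byte{0}) {
		if len(f) > 0 { // the trailing NUL yields one empty element
			names = append(names, string(f))
		}
	}
	return names
}

func main() {
	// the same byte stream `find -print0` would produce,
	// including a filename containing a newline
	listing := []byte("/tmp/a\x00/tmp/with\nnewline\x00")
	fmt.Printf("%q\n", parseRawList(listing))
}
```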
Comparing Snapshots
|
||||
*******************
|
||||
written, and the next backup needs to write new metadata again. If you really
want to save the access time for files and directories, you can pass the
``--with-atime`` option to the ``backup`` command.

On filesystems that do not support consistent inodes, such as FUSE-based
filesystems and pCloud, it is possible to ignore the inode when comparing
files for changes by passing the ``--ignore-inode`` option to the
``backup`` command.

Reading data from stdin
***********************
environment variables. The following lists these environment variables::

    OS_AUTH_URL                     Auth URL for keystone authentication
    OS_REGION_NAME                  Region name for keystone authentication
    OS_USERNAME                     Username for keystone authentication
    OS_USER_ID                      User ID for keystone v3 authentication
    OS_PASSWORD                     Password for keystone authentication
    OS_TENANT_ID                    Tenant ID for keystone v2 authentication
    OS_TENANT_NAME                  Tenant name for keystone v2 authentication

    OS_USER_DOMAIN_NAME             User domain name for keystone authentication
    OS_USER_DOMAIN_ID               User domain ID for keystone v3 authentication
    OS_PROJECT_NAME                 Project name for keystone authentication
    OS_PROJECT_DOMAIN_NAME          Project domain name for keystone authentication
    OS_PROJECT_DOMAIN_ID            Project domain ID for keystone v3 authentication
    OS_TRUST_ID                     Trust ID for keystone v3 authentication

    OS_APPLICATION_CREDENTIAL_ID    Application Credential ID (keystone v3)
    OS_APPLICATION_CREDENTIAL_NAME  Application Credential Name (keystone v3)

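As an illustration, keystone v3 username/password authentication could be configured by exporting the relevant variables before running restic (all values shown here are placeholders, not real credentials):

.. code-block:: console

    $ export OS_AUTH_URL=https://keystone.example.com/v3
    $ export OS_USERNAME=backup-user
    $ export OS_PASSWORD=secret
    $ export OS_USER_DOMAIN_NAME=Default
    $ export OS_PROJECT_NAME=backups
    $ export OS_PROJECT_DOMAIN_NAME=Default

Only the variables matching your authentication method (v2 tenant, v3 project, or application credentials) need to be set.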
The example command copies all snapshots from the source repository.
Snapshots which have previously been copied between repositories will
be skipped by later copy runs.

.. important:: This process will have to both download (read) and upload (write)
    the entire snapshot(s) due to the different encryption keys used in the
    source and destination repository. This *may incur higher bandwidth usage
    and costs* than expected during normal backup runs.

.. important:: The copying process does not re-chunk files, which may break
    deduplication between the files copied and files already stored in the
    destination repository. This means that copied files, which existed in
    both the source and destination repository, *may occupy up to twice their
    space* in the destination repository. See below for how to avoid this.

For the destination repository ``--repo2`` the password can be read from
a file ``--password-file2`` or from a command ``--password-command2``.
Alternatively, you can
pass the password via ``$RESTIC_PASSWORD2``. The key which should be used
for decryption can be selected by passing its ID via the flag ``--key-hint2``
or the environment variable ``$RESTIC_KEY_HINT2``.
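Putting this together, a copy run that reads the destination password from a file might look like this (the repository paths and password file location are only examples):

.. code-block:: console

    $ restic -r /srv/restic-repo copy --repo2 /srv/restic-repo-copy --password-file2 /secrets/repo2-password

The source repository's password is prompted for or taken from the usual ``RESTIC_PASSWORD`` mechanisms, while the ``2``-suffixed options apply only to the destination.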
.. note:: In case the source and destination repository use the same backend,
    the configuration options and environment variables used to configure the
    backend may apply to both repositories. For example, it might not be
    possible to specify different accounts for the source and destination
    repository. You can avoid this limitation by using the rclone backend
    along with remotes which are configured in rclone.

Filtering snapshots to copy
---------------------------

The list of snapshots to copy can be filtered by host, path in the backup
and / or a comma-separated tag list.

It is also possible to explicitly specify the snapshots to copy, in
which case only these instead of all snapshots will be copied:

.. code-block:: console

    $ restic -r /srv/restic-repo copy --repo2 /srv/restic-repo-copy 410b18a2 4e5d5487 latest

Ensuring deduplication for copied snapshots
-------------------------------------------
integrity of the pack files in the repository, use the ``--read-data`` flag.
Be aware that reading all data in the
repository might incur higher bandwidth costs than usual,
and that it takes more time than the default ``check``.

Alternatively, use the ``--read-data-subset`` parameter to check only a
subset of the repository pack files at a time. It supports two ways to select a
subset: one selects a specific range of pack files, the other selects a random
percentage of pack files.

Use ``--read-data-subset=n/t`` to check only a subset of the repository pack
files at a time. The parameter takes two values, ``n`` and ``t``. When the check
command runs, all pack files in the repository are logically divided into ``t``
(roughly equal) groups, and only files that belong to group number ``n`` are
checked. For example, the following commands check all repository pack files
over 5 separate invocations:

.. code-block:: console

    $ restic -r /srv/restic-repo check --read-data-subset=1/5
    $ restic -r /srv/restic-repo check --read-data-subset=2/5
    $ restic -r /srv/restic-repo check --read-data-subset=3/5
    $ restic -r /srv/restic-repo check --read-data-subset=4/5
    $ restic -r /srv/restic-repo check --read-data-subset=5/5

Use ``--read-data-subset=n%`` to check a randomly chosen subset of the
repository pack files. It takes one parameter, ``n``, the percentage of pack
files to check as an integer or floating point number. This is not guaranteed
to cover all available pack files, even after many runs, but it makes it easy to
automate checking a small subset of data after each backup. For a floating point
value the following command may be used:

.. code-block:: console

    $ restic -r /srv/restic-repo check --read-data-subset=2.5%

When checking bigger subsets, you will most likely specify the percentage as an
integer:

.. code-block:: console

    $ restic -r /srv/restic-repo check --read-data-subset=10%

||||
It is also possible to ``dump`` the contents of a whole folder structure to
stdout. To retain the information about the files and folders, restic will
output the contents in the tar (default) or zip format:

.. code-block:: console

    $ restic -r /srv/restic-repo dump latest /home/other/work > restore.tar

.. code-block:: console

    $ restic -r /srv/restic-repo dump -a zip latest /home/other/work > restore.zip

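Since the archive is written to stdout, it can also be extracted on the fly instead of being saved to a file, for example by piping it into ``tar`` (the target directory here is arbitrary):

.. code-block:: console

    $ restic -r /srv/restic-repo dump latest /home/other/work | tar -x -C /tmp/restored
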
data that was referenced by the snapshot from the repository. This can
be automated with the ``--prune`` option of the ``forget`` command,
which runs ``prune`` automatically if snapshots have been removed.

.. Warning::

    Pruning snapshots can be a time-consuming process, depending on the
    number of snapshots and the amount of data to process. During a prune
    operation, the repository is locked and backups cannot be completed.
    Please plan your pruning so that there's time to complete it and it
    doesn't interfere with regular backup runs.

It is advisable to run ``restic check`` after pruning, to make sure
you are alerted, should the internal data structures of the repository
To remove the unused data from the repository, the ``prune``
command must be run:

.. code-block:: console

    $ restic -r /srv/restic-repo prune
    enter password for repository:
    repository 33002c5e opened successfully, password is correct
    loading all snapshots...
    loading indexes...
    finding data that is still in use for 4 snapshots
    [0:00] 100.00% 4 / 4 snapshots
    searching used packs...
    collecting packs for deletion and repacking
    [0:00] 100.00% 5 / 5 packs processed

    to repack: 69 blobs / 1.078 MiB
    this removes 67 blobs / 1.047 MiB
    to delete: 7 blobs / 25.726 KiB
    total prune: 74 blobs / 1.072 MiB
    remaining: 16 blobs / 38.003 KiB
    unused size after prune: 0 B (0.00% of remaining size)

    repacking packs
    [0:00] 100.00% 2 / 2 packs repacked
    rebuilding index
    [0:00] 100.00% 3 / 3 packs processed
    deleting obsolete index files
    [0:00] 100.00% 3 / 3 files deleted
    removing 3 old packs
    [0:00] 100.00% 3 / 3 files deleted
    done

Afterwards the repository is smaller.
This can be done by passing the ``--prune`` option
to ``forget``:

.. code-block:: console

    8c02b94b 2017-02-21 10:48:33 mopped /home/user/work

    1 snapshots have been removed, running prune
    loading all snapshots...
    loading indexes...
    finding data that is still in use for 1 snapshots
    [0:00] 100.00% 1 / 1 snapshots
    searching used packs...
    collecting packs for deletion and repacking
    [0:00] 100.00% 5 / 5 packs processed

    to repack: 69 blobs / 1.078 MiB
    this removes 67 blobs / 1.047 MiB
    to delete: 7 blobs / 25.726 KiB
    total prune: 74 blobs / 1.072 MiB
    remaining: 16 blobs / 38.003 KiB
    unused size after prune: 0 B (0.00% of remaining size)

    repacking packs
    [0:00] 100.00% 2 / 2 packs repacked
    rebuilding index
    [0:00] 100.00% 3 / 3 packs processed
    deleting obsolete index files
    [0:00] 100.00% 3 / 3 files deleted
    removing 3 old packs
    [0:00] 100.00% 3 / 3 files deleted
    done

Removing snapshots according to a policy
----------------------------------------

The ``forget`` command accepts the following parameters:

For example, a ``--keep-within`` duration of ``2y5m7d3h`` will keep all snapshots
made in the two years, five months, seven days, and three hours before the
latest snapshot.

.. note:: All calendar related ``--keep-*`` options work on the natural time
    boundaries and not relative to when you run the ``forget`` command. Weeks
    are Monday 00:00 to Sunday 23:59, days 00:00 to 23:59, hours :00 to :59, etc.

Multiple policies will be ORed together so as to be as inclusive as possible
for keeping snapshots.

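As an illustration of combining such policies (the retention counts here are arbitrary), a run that keeps recent snapshots at decreasing granularity might look like:

.. code-block:: console

    $ restic -r /srv/restic-repo forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --keep-yearly 10

Because the policies are ORed together, a snapshot is kept as soon as any one of the ``--keep-*`` rules matches it.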
last-day-of-the-months (11 or 12, depending on whether the 5 weeklies cross a month).
And finally 75 last-day-of-the-year snapshots. All other snapshots are
removed.

Customize pruning
*****************

To understand the custom options, we first explain how the pruning process works:

1. All snapshots and directories within snapshots are scanned to determine
   which data is still in use.
2. For all files in the repository, restic finds out if the file is fully
   used, partly used or completely unused.
3. Completely unused files are marked for deletion. Fully used files are kept.
   A partially used file is either kept or marked for repacking depending on user
   options.

   Note that for repacking, restic must download the file from the repository
   storage and re-upload the needed data to the repository. This can be very
   time-consuming for remote repositories.
4. After deciding what to do, ``prune`` will actually perform the repack, modify
   the index according to the changes and delete the obsolete files.

The ``prune`` command accepts the following options:

- ``--max-unused limit`` allow unused data up to the specified limit within the repository.
  This allows restic to keep partly used files instead of repacking them.

  The limit can be specified in several ways:

  * As an absolute size (e.g. ``200M``). If you want to minimize the space
    used by your repository, pass ``0`` to this option.
  * As a size relative to the total repo size (e.g. ``10%``). This means that
    after prune, at most ``10%`` of the total data stored in the repo may be
    unused data. If the repo after prune has a size of 500MB, then at most
    50MB may be unused.
  * If the string ``unlimited`` is passed, there is no limit for partly
    unused files. This means that as long as some data is still used within
    a file stored in the repo, restic will just leave it there. Use this if
    you want to minimize the time and bandwidth used by the ``prune``
    operation.

  Restic tries to repack as little data as possible while still ensuring this
  limit for unused data.

- ``--max-repack-size size`` if set, limits the total size of files to repack.
  As ``prune`` first stores all repacked files and deletes the obsolete files at the end,
  this option can be handy if you expect many files to be repacked and are
  concerned about running low on storage.

- ``--repack-cacheable-only`` if set to true, only files which contain
  metadata and would be stored in the cache are repacked. Other pack files are
  not repacked if this option is set. This allows very fast repacking
  using only cached data. It can, however, imply that the unused data in
  your repository exceeds the value given by ``--max-unused``.
  The default value is false.

- ``--dry-run`` only show what ``prune`` would do.

- ``--verbose`` increased verbosity shows additional statistics for ``prune``.

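Putting these options together, a cautious run that previews its work while bounding both the unused data left behind and the repack volume might look like this (the chosen limits are arbitrary examples):

.. code-block:: console

    $ restic -r /srv/restic-repo prune --dry-run --max-unused 10% --max-repack-size 500M

Once the dry-run output looks reasonable, the same command can be repeated without ``--dry-run`` to actually perform the prune.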