Mirror of https://github.com/restic/restic.git, synced 2026-03-26 16:02:43 +00:00

Compare commits: debug-chun...v0.10.0 (272 commits)
Commits (abbreviated SHA1 hashes, newest first; the table's author and date columns were empty):

```
40832b2927 c8a94eced7 ee6e981b4e 96fd982f6a 6ff0082c02 95c1d7d959
f003410402 655430550b 1823b8195c 311ad2d2d0 a10b44a265 baf3a9aa3b
ffe6dce7e7 8ce0ce387f 3c44598bf6 3bb55fd6bf 36efefa7bd ac4b8c98ac
4dcd6abf37 cb3f531050 23fcbb275a 6e3215a80d b10dce541e 4f221c4022
f5c448aa65 c0fc85d303 0c48e515f0 97950ab81a 59fca85844 e207257714
82e1cbed4f 8903b6c88a 93583c01b1 88664ba222 121233e1b3 8cc9514879
2e7d475029 d3a286928a c46edcd9d6 1ede018ea6 b77e933d80 d0329cf3eb
dc31529fc3 4784540f04 84ea2389ae b4a7ce86cf 7ee0964880 f6f11400c2
e2dc5034d3 9a1b3cb5d9 b22655367c 068a3ce23f ee05501ce7 014600bee6
d9a80e07b9 d19f05c960 460e2ffbf6 49b6aac3fa 2f8335554c 37113282ca
337725c354 2ddb7ffb7e 81dcfea11a 55071ee367 dcf9ded977 bcf44a9c3f
a7b4c19abf d3fcfeba3a e69449bf2c da4193c3ef fe6445e0f4 ea81a0e282
80a11960dd d6f739ec22 b98598e55f d5f86effa1 c34c731698 412623b848
bbe8b73f03 91e8d998cd 9a4796594a 15374d22e9 88ad58d6cd 591a8c4cdf
ec9a53b7e8 34a3adfd8d 9867c4bbb4 efb4a981cf 2447f3f110 b25978a53c
b0a8c4ad6c 908b23fda0 4508d406ef 7048cc3e58 eb7c00387c bc0501d72c
17995dec7a e915cedc3d cdcaecd27d b43ab67a22 7ddfd6cabe b1b3f1ecb6
fa135f72bf 51c22f4223 1a5b66f33b da6a34e044 fe69b83074 08d24ff99e
d8b80e9862 1c84aceb39 575ed9a47e 8f811642c3 f4b9544ab2 367449dede
7042bafea5 744a15247d 3ba19869be 0fed6a8dfc 643bbbe156 eca0f0ad24
08dee8a52b b112533812 5e63294355 84b6f1ec53 06fb4ea3f0 e38d415173
d81a396944 0b21ec44b7 bd36731119 38a2f9c07b 5af2815627 b55de2260d
9be4fe3e84 05116e4787 04f79b9642 8b358935a0 66d089e239 49d3efe547
9762bec091 0eb8553c87 d3692f5b81 1c0b61204b 2ee654763b b7b479b668
4cf9656f12 2580eef2aa 2d7ab9115f a178e5628e af66a62c04 9ea1a78bd4
184103647a c81b122374 48f97f3567 3ce9893e0b 248c7c3828 f8316948d1
be54ceff66 ea97ff1ba4 01b9581453 3cd927d180 bf7b1f12ea 8554332894
3e93b36ca4 573a2fb240 c847aace35 9d1fb94c6c 020cab8e08 07da61baee
37c95bf5da c86d2f23aa 96ec04d74d 9c3414374a 3d530dfc91 c43f5b2664
38087e40d9 bbc960f957 309598c237 03d23e6faa b10acd2af7 9175795fdb
5d8d70542f 7c23381a2b 34181b13a2 bcd47ec3a2 a666a6d576 e388d962a5
3b7a3711e6 9b0e718852 82c908871d ddf0b8cd0b 2d0c138c9b ef325ffc02
0f67ae813a 7a165f32a9 36c69e3ca7 35d8413639 c66a0e408c 70f4c014ef
f0d8710611 bd3e280f6d 2746dcdb5f 5729d967f5 f9f6124558 8074879c5f
7bda28f31f 255ba83c4b 7dc200c593 9ac90cf5cd b84f5177cb 4cf1c8e8da
58719e1f47 d42c169458 8598bb042b c6b74962df 2c72924ffb 02bec13ef2
64976b1a4d 6a607d6ded 6fedf1a7f4 df946fd9f8 4e6a9767de 1bc80c3c8d
0fcef2ec23 212607dc8a 190d8e2f51 f4cd2a7120 aba270df7e b5543cff5d
285b5236c2 bb1e258bb7 182655bc88 74bc7141c1 1361341c58 ce4a2f4ca6
cf979e2b81 d92e2c5769 7419844885 1d66bb9e62 0b2c31b05b dd7b4f54f5
6896c6449b 735a8074d5 70347e95d5 0fa3091c78 91906911b0 fae7f78057
ac9ec4b990 087c770161 6856d1e422 8c1261ff02 26704be17f 2c3360db98
cba6ad8d8e 2a3312ac35 c35c4e0cbf 84475aa3a8 f12f9ae240 5cc1760fdf
32ac5486e9 c4336978eb 649cbec6c5 b17bd7f860 68f1e9c524 1ee2306033
c882a92cd6 f54db5d796 843e7f404e d465b5b9ad 9f7cd69f13 f97a680887
42a3db05b0 90fc639a67
```
.github/ISSUE_TEMPLATE/Feature.md (vendored, 4 lines changed)

```diff
@@ -39,8 +39,8 @@ Please describe the feature you'd like us to add here.
 -->
 
-What are you trying to do?
---------------------------
+What are you trying to do? What problem would this solve?
+---------------------------------------------------------
 
 <!--
 This section should contain a brief description what you're trying to do, which
```
.github/PULL_REQUEST_TEMPLATE.md (vendored, 10 lines changed)

```diff
@@ -10,11 +10,11 @@ your time and add more commits. If you're done and ready for review, please
 check the last box.
 -->
 
-What is the purpose of this change? What does it change?
---------------------------------------------------------
+What does this PR change? What problem does it solve?
+-----------------------------------------------------
 
 <!--
-Describe the changes here, as detailed as needed.
+Describe the changes and their purpose here, as detailed as needed.
 -->
 
 Was the change discussed in an issue or in the forum before?
@@ -23,8 +23,8 @@ Was the change discussed in an issue or in the forum before?
 <!--
 Link issues and relevant forum posts here.
 
-If this PR resolves an issue on GitHub, use "closes #1234" so that the issue is
-closed automatically when this PR is merged.
+If this PR resolves an issue on GitHub, use "Closes #1234" so that the issue
+is closed automatically when this PR is merged.
 -->
 
 Checklist
```
.travis.yml (28 lines changed)

```diff
@@ -3,22 +3,6 @@ sudo: false
 
 matrix:
   include:
-    - os: linux
-      go: "1.11.x"
-      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
-      cache:
-        directories:
-          - $HOME/.cache/go-build
-          - $HOME/gopath/pkg/mod
-
-    - os: linux
-      go: "1.12.x"
-      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
-      cache:
-        directories:
-          - $HOME/.cache/go-build
-          - $HOME/gopath/pkg/mod
-
     - os: linux
       go: "1.13.x"
       env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
@@ -27,9 +11,17 @@ matrix:
           - $HOME/.cache/go-build
           - $HOME/gopath/pkg/mod
 
-    # only run fuse and cloud backends tests on Travis for the latest Go on Linux
     - os: linux
       go: "1.14.x"
+      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
+      cache:
+        directories:
+          - $HOME/.cache/go-build
+          - $HOME/gopath/pkg/mod
+
+    # only run fuse and cloud backends tests on Travis for the latest Go on Linux
+    - os: linux
+      go: "1.15.x"
       sudo: true
       cache:
         directories:
@@ -37,7 +29,7 @@ matrix:
           - $HOME/gopath/pkg/mod
 
     - os: osx
-      go: "1.14.x"
+      go: "1.15.x"
       env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
       cache:
         directories:
```
CHANGELOG.md (472 lines added)

Changelog for restic 0.10.0 (2020-09-19)
========================================

The following sections list the changes in restic 0.10.0 relevant to
restic users. The changes are ordered by importance.

Summary
-------

* Fix #1863: Report correct number of directories processed by backup
* Fix #2254: Fix tar issues when dumping `/`
* Fix #2281: Handle format verbs like '%' properly in `find` output
* Fix #2298: Do not hang when run as a background job
* Fix #2389: Fix mangled json output of backup command
* Fix #2390: Refresh lock timestamp
* Fix #2429: Backup --json reports total_bytes_processed as 0
* Fix #2469: Fix incorrect bytes stats in `diff` command
* Fix #2518: Do not crash with Synology NAS sftp server
* Fix #2531: Fix incorrect size calculation in `stats --mode restore-size`
* Fix #2537: Fix incorrect file counts in `stats --mode restore-size`
* Fix #2592: SFTP backend supports IPv6 addresses
* Fix #2607: Honor RESTIC_CACHE_DIR environment variable on Mac and Windows
* Fix #2668: Don't abort the stats command when data blobs are missing
* Fix #2674: Add stricter prune error checks
* Fix #2899: Fix possible crash in the progress bar of check --read-data
* Chg #2482: Remove vendored dependencies
* Chg #2546: Return exit code 3 when failing to backup all source data
* Chg #2600: Update dependencies, require Go >= 1.13
* Chg #1597: Honor the --no-lock flag in the mount command
* Enh #1570: Support specifying multiple host flags for various commands
* Enh #1680: Optimize `restic mount`
* Enh #2072: Display snapshot date when using `restic find`
* Enh #2175: Allow specifying user and host when creating keys
* Enh #2277: Add support for ppc64le
* Enh #2395: Ignore sync errors when operation not supported by local filesystem
* Enh #2427: Add flag `--iexclude-file` to backup command
* Enh #2569: Support excluding files by their size
* Enh #2571: Self-heal missing file parts during backup of unchanged files
* Enh #2858: Support filtering snapshots by tag and path in the stats command
* Enh #323: Add command for copying snapshots between repositories
* Enh #551: Use optimized library for hash calculation of file chunks
* Enh #2195: Simplify and improve restore performance
* Enh #2328: Improve speed of check command
* Enh #2423: Support user@domain parsing as user
* Enh #2576: Improve the chunking algorithm
* Enh #2598: Improve speed of diff command
* Enh #2599: Slightly reduce memory usage of prune and stats commands
* Enh #2733: S3 backend: Add support for WebIdentityTokenFile
* Enh #2773: Optimize handling of new index entries
* Enh #2781: Reduce memory consumption of in-memory index
* Enh #2786: Optimize `list blobs` command
* Enh #2790: Optimized file access in restic mount
* Enh #2840: Speed-up file deletion in forget, prune and rebuild-index
Details
-------

* Bugfix #1863: Report correct number of directories processed by backup

The directory statistics calculation was fixed to report the actual number of processed
directories instead of always zero.

https://github.com/restic/restic/issues/1863

* Bugfix #2254: Fix tar issues when dumping `/`

We've fixed an issue with dumping either `/` or files on the first sublevel, e.g. `/foo`, to
tar. This also fixes the same tar dumping issue on Windows.

https://github.com/restic/restic/issues/2254
https://github.com/restic/restic/issues/2357
https://github.com/restic/restic/pull/2255

* Bugfix #2281: Handle format verbs like '%' properly in `find` output

The JSON or "normal" output of the `find` command can now deal with file names that contain
substrings which the Go `fmt` package considers "format verbs", like `%s`.

https://github.com/restic/restic/issues/2281
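The class of bug fixed here is easy to reproduce with Go's `fmt` package. The sketch below is illustrative only, not restic's actual code: a file name containing `%s` is mangled when it is used as the format string itself, and reproduced verbatim when passed as an argument to a fixed `%s` format.

```go
package main

import "fmt"

// sprintf is fmt.Sprintf behind a variable, so the deliberately-buggy call
// below is not rejected by go vet's printf check.
var sprintf = fmt.Sprintf

// render mimics the buggy pattern: the file name itself is used as the
// format string, so fmt interprets any "format verbs" it happens to contain.
func render(name string) string {
	return sprintf(name)
}

// renderSafe is the fixed pattern: the file name is only ever an argument to
// a constant format string, so its content is reproduced verbatim.
func renderSafe(name string) string {
	return fmt.Sprintf("%s", name)
}

func main() {
	name := "video-50%s.mp4" // a legitimate file name containing "%s"
	fmt.Println(render(name))     // mangled: the %s verb has no operand
	fmt.Println(renderSafe(name)) // printed verbatim
}
```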

* Bugfix #2298: Do not hang when run as a background job

Restic used to hang on exit while restoring the terminal configuration when it was started as
a background job, for example using `restic ... &`. This has been fixed by only restoring the
terminal configuration when restic is interrupted while reading a password from the
terminal.

https://github.com/restic/restic/issues/2298

* Bugfix #2389: Fix mangled json output of backup command

We've fixed a race condition in the json output of the backup command that could cause multiple
lines to get mixed up. We've also ensured that the backup summary is printed last.

https://github.com/restic/restic/issues/2389
https://github.com/restic/restic/pull/2545

* Bugfix #2390: Refresh lock timestamp

Long-running operations did not refresh the lock timestamp, resulting in locks becoming
stale. This is now fixed.

https://github.com/restic/restic/issues/2390

* Bugfix #2429: Backup --json reports total_bytes_processed as 0

We've fixed the json output of total_bytes_processed. The non-json output was already fixed
with pull request #2138, but the json output was left untouched.

https://github.com/restic/restic/issues/2429

* Bugfix #2469: Fix incorrect bytes stats in `diff` command

In some cases, the wrong number of bytes (e.g. 16777215.998 TiB) was reported by the `diff`
command. This is now fixed.

https://github.com/restic/restic/issues/2469

* Bugfix #2518: Do not crash with Synology NAS sftp server

When restic was used to store data on an sftp server on a Synology NAS with a relative path (one
which does not start with a slash), it could go into an endless loop trying to create
directories on the server. We've fixed this bug by using a function from the sftp library
instead of our own implementation.

The bug was discovered because the Synology sftp server behaves erratically with
non-absolute paths (e.g. `home/restic-repo`). This can be resolved by just using an absolute
path instead (`/home/restic-repo`). We've also added a paragraph to the FAQ.

https://github.com/restic/restic/issues/2518
https://github.com/restic/restic/issues/2363
https://github.com/restic/restic/pull/2530

* Bugfix #2531: Fix incorrect size calculation in `stats --mode restore-size`

The restore-size mode of stats was counting hard-linked files as if they were independent.

https://github.com/restic/restic/issues/2531

* Bugfix #2537: Fix incorrect file counts in `stats --mode restore-size`

The restore-size mode of stats was failing to count empty directories and some files with hard
links.

https://github.com/restic/restic/issues/2537

* Bugfix #2592: SFTP backend supports IPv6 addresses

The SFTP backend now supports IPv6 addresses natively, without relying on aliases in the
external SSH configuration.

https://github.com/restic/restic/pull/2592

* Bugfix #2607: Honor RESTIC_CACHE_DIR environment variable on Mac and Windows

On Mac and Windows, the RESTIC_CACHE_DIR environment variable was ignored. This variable can
now be used on all platforms to set the directory where restic stores its caches.

https://github.com/restic/restic/pull/2607

* Bugfix #2668: Don't abort the stats command when data blobs are missing

Running the stats command in the blobs-per-file mode on a repository with missing data blobs
previously resulted in a crash.

https://github.com/restic/restic/pull/2668

* Bugfix #2674: Add stricter prune error checks

Additional checks were added to the prune command in order to improve its resiliency to
backend, hardware and/or networking issues. The checks now detect a few more cases where such
outside factors could potentially cause data loss.

https://github.com/restic/restic/pull/2674

* Bugfix #2899: Fix possible crash in the progress bar of check --read-data

We've fixed a possible crash while displaying the progress bar for the check --read-data
command. The crash occurred when the length of the progress bar status exceeded the terminal
width, which only happened for very narrow terminal windows.

https://github.com/restic/restic/pull/2899
https://forum.restic.net/t/restic-rclone-pcloud-connection-issues/2963/15

* Change #2482: Remove vendored dependencies

We've removed the vendored dependencies (in the subdir `vendor/`). When building restic, the
Go compiler automatically fetches the dependencies. It also cryptographically verifies
that the correct code has been fetched by using the hashes in `go.sum` (see the link to the
documentation below).

https://github.com/restic/restic/issues/2482
https://golang.org/cmd/go/#hdr-Module_downloading_and_verification

* Change #2546: Return exit code 3 when failing to backup all source data

The backup command used to return a zero exit code as long as a snapshot could be created
successfully, even if some of the source files could not be read (in which case the snapshot
would contain the rest of the files).

This made it hard for automation/scripts to detect failures/incomplete backups by looking at
the exit code. Restic now returns the following exit codes for the backup command:

- 0 when the command was successful
- 1 when there was a fatal error (no snapshot created)
- 3 when some source data could not be read (incomplete snapshot created)

https://github.com/restic/restic/issues/956
https://github.com/restic/restic/issues/2064
https://github.com/restic/restic/issues/2526
https://github.com/restic/restic/issues/2364
https://github.com/restic/restic/pull/2546
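A hedged sketch of what this exit-code contract looks like from the caller's side. The constants and types below are illustrative stand-ins, not restic's implementation; they only encode the three documented outcomes.

```go
package main

import "fmt"

// Exit codes of `restic backup` as documented for restic 0.10.
const (
	exitOK         = 0 // snapshot created, all source data read
	exitFatal      = 1 // fatal error, no snapshot created
	exitIncomplete = 3 // snapshot created, but some source data was unreadable
)

// backupResult is a hypothetical summary of a backup run, used here only to
// show the mapping to exit codes.
type backupResult struct {
	snapshotCreated bool
	readErrors      int
}

// exitCode maps a backup outcome to the documented exit code.
func exitCode(r backupResult) int {
	switch {
	case !r.snapshotCreated:
		return exitFatal
	case r.readErrors > 0:
		return exitIncomplete
	default:
		return exitOK
	}
}

func main() {
	// Two files were unreadable but a (partial) snapshot was still written.
	fmt.Println(exitCode(backupResult{snapshotCreated: true, readErrors: 2}))
}
```

A wrapper script can then distinguish "retry later, snapshot incomplete" (3) from "no snapshot at all" (1) instead of treating every non-zero code the same.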

* Change #2600: Update dependencies, require Go >= 1.13

Restic now requires Go 1.13 or newer. This allows simplifications in the build process and the
removal of workarounds.

This is also probably the last version of restic that still supports mounting repositories
via fuse on macOS. The library we're using for fuse no longer supports macOS, and osxfuse is no
longer open source.

https://github.com/bazil/fuse/issues/224
https://github.com/osxfuse/osxfuse/issues/590
https://github.com/restic/restic/pull/2600
https://github.com/restic/restic/pull/2852
https://github.com/restic/restic/pull/2927

* Change #1597: Honor the --no-lock flag in the mount command

The mount command no longer locks the repository when given the --no-lock flag. This makes it
possible to mount repositories which are archived on a read-only backend/filesystem.

https://github.com/restic/restic/issues/1597
https://github.com/restic/restic/pull/2821

* Enhancement #1570: Support specifying multiple host flags for various commands

Previously, commands didn't take more than one `--host` or `-H` argument into account, which
could be limiting with e.g. the `forget` command.

The `dump`, `find`, `forget`, `ls`, `mount`, `restore`, `snapshots`, `stats` and `tag`
commands now take multiple `--host` and `-H` flags into account.

https://github.com/restic/restic/issues/1570

* Enhancement #1680: Optimize `restic mount`

We've optimized the FUSE implementation used within restic. `restic mount` is now more
responsive and uses less memory.

https://github.com/restic/restic/issues/1680
https://github.com/restic/restic/pull/2587
https://github.com/restic/restic/pull/2787

* Enhancement #2072: Display snapshot date when using `restic find`

Added the respective snapshot date to the output of `restic find`.

https://github.com/restic/restic/issues/2072

* Enhancement #2175: Allow specifying user and host when creating keys

When adding a new key to the repository, the username and hostname for the new key can be
specified on the command line. This allows overriding the defaults, for example if you would
prefer to use the FQDN to identify the host, or if you want to add keys for several different
hosts without having to run the key add command on those hosts.

https://github.com/restic/restic/issues/2175

* Enhancement #2277: Add support for ppc64le

Adds support for ppc64le, the processor architecture from IBM.

https://github.com/restic/restic/issues/2277

* Enhancement #2395: Ignore sync errors when operation not supported by local filesystem

The local backend has been modified to work with filesystems which don't support the `sync`
operation. This operation is normally used by restic to ensure that data files are fully
written to disk before continuing.

On these limited filesystems, saving a file in the backend previously failed with an
"operation not supported" error. This error is now ignored, which means that e.g. an SMB mount
on macOS can now be used as a storage location for a repository.

https://github.com/restic/restic/issues/2395
https://forum.restic.net/t/sync-errors-on-mac-over-smb/1859

* Enhancement #2427: Add flag `--iexclude-file` to backup command

The backup command now supports the flag `--iexclude-file`, a case-insensitive version of
`--exclude-file`.

https://github.com/restic/restic/issues/2427
https://github.com/restic/restic/pull/2898

* Enhancement #2569: Support excluding files by their size

The `backup` command now supports the `--exclude-larger-than` option to exclude files which
are larger than the specified maximum size. This can for example be useful to exclude
unimportant files with a large file size.

https://github.com/restic/restic/issues/2569
https://github.com/restic/restic/pull/2914

* Enhancement #2571: Self-heal missing file parts during backup of unchanged files

We've improved the resilience of restic to certain types of repository corruption.

For files that are unchanged since the parent snapshot, the backup command now verifies that
all parts of the files still exist in the repository. Parts that are missing, e.g. from a damaged
repository, are backed up again. This verification was already run for files that were
modified since the parent snapshot, but is now also done for unchanged files.

Note that restic will not back up file parts that are referenced in the index but whose actual
data is not present on disk, as this situation can only be detected by restic check. Please
ensure that you run `restic check` regularly.

https://github.com/restic/restic/issues/2571
https://github.com/restic/restic/pull/2827

* Enhancement #2858: Support filtering snapshots by tag and path in the stats command

We've added filtering snapshots by `--tag tagList` and by `--path path` to the `stats`
command. This includes filtering of only 'latest' snapshots or all snapshots in a repository.

https://github.com/restic/restic/issues/2858
https://github.com/restic/restic/pull/2859
https://forum.restic.net/t/stats-for-a-host-and-filtered-snapshots/3020

* Enhancement #323: Add command for copying snapshots between repositories

We've added a copy command, allowing you to copy snapshots from one repository to another.

Note that this process will have to read (download) and write (upload) the entire snapshot(s)
due to the different encryption keys used on the source and destination repositories. Also,
the transferred files are not re-chunked, which may break deduplication between files
already stored in the destination repo and files copied there using this command.

To fully support deduplication between repositories when the copy command is used, the init
command now supports the `--copy-chunker-params` option, which initializes the new
repository with the same parameters for splitting files into chunks as an already existing
repository. This allows copied snapshots to be equally deduplicated in both repositories.

https://github.com/restic/restic/issues/323
https://github.com/restic/restic/pull/2606
https://github.com/restic/restic/pull/2928

* Enhancement #551: Use optimized library for hash calculation of file chunks

We've switched the library used to calculate the hashes of file chunks, which are used for
deduplication, to the optimized Minio SHA-256 implementation.

Depending on the CPU, it improves the hashing throughput by 10-30%. Modern x86 CPUs with the SHA
Extensions should be about two to three times faster.

https://github.com/restic/restic/issues/551
https://github.com/restic/restic/pull/2709

* Enhancement #2195: Simplify and improve restore performance

Significantly improves restore performance of large files (i.e. 50M+):
https://github.com/restic/restic/issues/2074
https://forum.restic.net/t/restore-using-rclone-gdrive-backend-is-slow/1112/8
https://forum.restic.net/t/degraded-restore-performance-s3-backend/1400

Fixes the "not enough cache capacity" error during restore:
https://github.com/restic/restic/issues/2244

NOTE: This new implementation does not guarantee the order in which blobs are written to the
target files; for example, the last blob of a file can be written before any of the preceding
blobs. It is therefore possible to have gaps in the data written to the target files if the
restore fails or is interrupted by the user.

The implementation will try to preallocate space for the restored files on the filesystem to
prevent file fragmentation. This ensures good read performance for large files, such as VM
images. If preallocating space is not supported by the filesystem, this step is silently
skipped.

https://github.com/restic/restic/pull/2195
https://github.com/restic/restic/pull/2893
|
* Enhancement #2328: Improve speed of check command
|
||||||
|
|
||||||
|
We've improved the check command to traverse trees only once independent of whether they are
|
||||||
|
contained in multiple snapshots. The check command is now much faster for repositories with a
|
||||||
|
large number of snapshots.
|
||||||
|
|
||||||
|
https://github.com/restic/restic/issues/2284
|
||||||
|
https://github.com/restic/restic/pull/2328
|
||||||
|
|
||||||
|
* Enhancement #2423: Support user@domain parsing as user
|
||||||
|
|
||||||
|
Added the ability for user@domain-like users to be authenticated over SFTP servers.
|
||||||
|
|
||||||
|
https://github.com/restic/restic/pull/2423
|
||||||
|
|
||||||
|
* Enhancement #2576: Improve the chunking algorithm
|
||||||
|
|
||||||
|
We've updated the chunker library responsible for splitting files into smaller blocks. It
|
||||||
|
should improve the chunking throughput by 5-15% depending on the CPU.
|
||||||
|
|
||||||
|
https://github.com/restic/restic/issues/2820
|
||||||
|
https://github.com/restic/restic/pull/2576
|
||||||
|
https://github.com/restic/restic/pull/2845
|
||||||
|
|
||||||
|
* Enhancement #2598: Improve speed of diff command
|
||||||
|
|
||||||
|
We've improved the performance of the diff command when comparing snapshots with similar
|
||||||
|
content. It should run up to twice as fast as before.
|
||||||
|
|
||||||
|
https://github.com/restic/restic/pull/2598
|
||||||
|
|
||||||
|
* Enhancement #2599: Slightly reduce memory usage of prune and stats commands
|
||||||
|
|
||||||
|
The prune and the stats command kept directory identifiers in memory twice while searching for
|
||||||
|
used blobs.
|
||||||
|
|
||||||
|
https://github.com/restic/restic/pull/2599
|
||||||
|
|
||||||
|
* Enhancement #2733: S3 backend: Add support for WebIdentityTokenFile
|
||||||
|
|
||||||
|
We've added support for EKS IAM roles for service accounts feature to the S3 backend.
|
||||||
|
|
||||||
|
https://github.com/restic/restic/issues/2703
|
||||||
|
https://github.com/restic/restic/pull/2733
|
||||||
|
|
||||||
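On EKS, the credentials for this feature come from the web-identity environment variables; a hedged sketch follows (the variable names are the standard AWS SDK ones and are not stated in the entry above, and the role ARN and bucket are placeholders):

```shell
# Assumed environment for IAM Roles for Service Accounts; inside an EKS pod
# these two variables are normally injected by the EKS pod identity webhook.
export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/restic-backup"
export AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"

# restic then authenticates to S3 without static access keys.
restic -r s3:s3.amazonaws.com/my-bucket snapshots
```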
* Enhancement #2773: Optimize handling of new index entries

Restic now uses less memory for backups which add a lot of data, e.g. large initial backups. In
addition, we've improved the stability in some edge cases.

https://github.com/restic/restic/pull/2773

* Enhancement #2781: Reduce memory consumption of in-memory index

We've improved how the index is stored in memory. This change can reduce memory usage for large
repositories by up to 50% (depending on the operation).

https://github.com/restic/restic/pull/2781
https://github.com/restic/restic/pull/2812

* Enhancement #2786: Optimize `list blobs` command

We've changed the implementation of `list blobs`, which should now be a bit faster and consume
almost no memory even for large repositories.

https://github.com/restic/restic/pull/2786

* Enhancement #2790: Optimized file access in restic mount

Reading large (> 100GiB) files from restic mountpoints is now faster, and the speedup is
greater for larger files.

https://github.com/restic/restic/pull/2790

* Enhancement #2840: Speed-up file deletion in forget, prune and rebuild-index

We've sped up the file deletion for the commands forget, prune and rebuild-index, especially
for remote repositories. Deletion was sequential before and is now run in parallel.

https://github.com/restic/restic/pull/2840
Changelog for restic 0.9.6 (2019-11-22)
=======================================

@@ -1361,10 +1825,10 @@ Details

 Exploiting the vulnerability requires a Linux/Unix system which saves backups via restic and
 a Windows systems which restores files from the repo. In addition, the attackers need to be able
-to create create files with arbitrary names which are then saved to the restic repo. For
-example, by creating a file named "..\test.txt" (which is a perfectly legal filename on Linux)
-and restoring a snapshot containing this file on Windows, it would be written to the parent of
-the target directory.
+to create files with arbitrary names which are then saved to the restic repo. For example, by
+creating a file named "..\test.txt" (which is a perfectly legal filename on Linux) and
+restoring a snapshot containing this file on Windows, it would be written to the parent of the
+target directory.

 We'd like to thank Tyler Spivey for reporting this responsibly!
@@ -60,16 +60,11 @@ uploading it somewhere or post only the parts that are really relevant.

 Development Environment
 =======================

-The repository contains several sets of directories with code: `cmd/` and
-`internal/` contain the code written for restic, whereas `vendor/` contains
-copies of libraries restic depends on. The libraries are managed with the
-command `go mod vendor`.
+The repository contains the code written for restic in the directories
+`cmd/` and `internal/`.

-Go >= 1.11
-----------
-
-For Go version 1.11 or later, you should clone the repo (without having
-`$GOPATH` set) and `cd` into the directory:
+Restic requires Go version 1.12 or later for compiling. Clone the repo (without
+having `$GOPATH` set) and `cd` into the directory:

     $ unset GOPATH
     $ git clone https://github.com/restic/restic

@@ -79,40 +74,12 @@ Then use the `go` tool to build restic:

     $ go build ./cmd/restic
     $ ./restic version
-    restic 0.9.2-dev (compiled manually) compiled with go1.11 on linux/amd64
+    restic 0.9.6-dev (compiled manually) compiled with go1.14 on linux/amd64

 You can run all tests with the following command:

     $ go test ./...

-Go < 1.11
----------
-
-In order to compile restic with Go before 1.11, it needs to be checked out at
-the right path within a `GOPATH`. The concept of a `GOPATH` is explained in
-["How to write Go code"](https://golang.org/doc/code.html).
-
-If you do not have a directory with Go code yet, executing the following
-instructions in your shell will create one for you and check out the restic
-repo:
-
-    $ export GOPATH="$HOME/go"
-    $ mkdir -p "$GOPATH/src/github.com/restic"
-    $ cd "$GOPATH/src/github.com/restic"
-    $ git clone https://github.com/restic/restic
-    $ cd restic
-
-You can then build restic as follows:
-
-    $ go build ./cmd/restic
-    $ ./restic version
-    restic compiled manually
-    compiled with go1.8.3 on linux/amd64
-
-The following commands can be used to run all the tests:
-
-    $ go test ./...
-
 Providing Patches
 =================

@@ -125,15 +92,14 @@ down to the following steps:
    GitHub. For a new feature, please add an issue before starting to work on
    it, so that duplicate work is prevented.

-1. First we would kindly ask you to fork our project on GitHub if you haven't
-   done so already.
+1. Next, fork our project on GitHub if you haven't done so already.

-2. Clone the repository locally and create a new branch. If you are working on
-   the code itself, please set up the development environment as described in
-   the previous section.
+2. Clone your fork of the repository locally and **create a new branch** for
+   your changes. If you are working on the code itself, please set up the
+   development environment as described in the previous section.

-3. Then commit your changes as fine grained as possible, as smaller patches,
-   that handle one and only one issue are easier to discuss and merge.
+3. Commit your changes to the new branch as fine grained as possible, as
+   smaller patches, for individual changes, are easier to discuss and merge.

 4. Push the new branch with your changes to your fork of the repository.

@@ -146,20 +112,19 @@ down to the following steps:
    existing commit, use common sense to decide which is better), they will be
    automatically added to the pull request.

-7. If your pull request changes anything that users should be aware
-   of (a bugfix, a new feature, ...) please add an entry as a new
-   file in `changelog/unreleased` including the issue number in the
-   filename (e.g. `issue-8756`). Use the template in
-   `changelog/TEMPLATE` for the content. It will be used in the
-   announcement of the next stable release. While writing, ask
-   yourself: If I were the user, what would I need to be aware of
-   with this change.
+7. If your pull request changes anything that users should be aware of
+   (a bugfix, a new feature, ...) please add an entry as a new file in
+   `changelog/unreleased` including the issue number in the filename (e.g.
+   `issue-8756`). Use the template in `changelog/TEMPLATE` for the content.
+   It will be used in the announcement of the next stable release. While
+   writing, ask yourself: If I were the user, what would I need to be aware
+   of with this change?

 8. Once your code looks good and passes all the tests, we'll merge it. Thanks
    a lot for your contribution!

 Please provide the patches for each bug or feature in a separate branch and
-open up a pull request for each.
+open up a pull request for each, as this simplifies discussion and merging.

 The restic project uses the `gofmt` tool for Go source indentation, so please
 run
@@ -20,8 +20,8 @@ init:

 install:
 - rmdir c:\go /s /q
-- appveyor DownloadFile https://dl.google.com/go/go1.14.windows-amd64.msi
-- msiexec /i go1.14.windows-amd64.msi /q
+- appveyor DownloadFile https://dl.google.com/go/go1.15.2.windows-amd64.msi
+- msiexec /i go1.15.2.windows-amd64.msi /q
 - go version
 - go env
 - appveyor DownloadFile http://sourceforge.netcologne.de/project/gnuwin32/tar/1.13-1/tar-1.13-1-bin.zip -FileName tar.zip
build.go (6 lines changed)

@@ -3,7 +3,7 @@
 // This program aims to make building Go programs for end users easier by just
 // calling it with `go run`, without having to setup a GOPATH.
 //
-// This program needs Go >= 1.11. It'll use Go modules for compilation. It
+// This program needs Go >= 1.12. It'll use Go modules for compilation. It
 // builds the package configured as Main in the Config struct.

 // BSD 2-Clause License

@@ -327,8 +327,8 @@ func (v GoVersion) String() string {
 }

 func main() {
-	if !goVersion.AtLeast(GoVersion{1, 11, 0}) {
-		die("Go version (%v) is too old, Go <= 1.11 does not support Go Modules\n", goVersion)
+	if !goVersion.AtLeast(GoVersion{1, 12, 0}) {
+		die("Go version (%v) is too old, restic requires Go >= 1.12\n", goVersion)
 	}

 	if !goVersion.AtLeast(config.MinVersion) {
changelog/0.10.0_2020-09-19/issue-1680 (new file, 8 lines)
@@ -0,0 +1,8 @@
+Enhancement: Optimize `restic mount`
+
+We've optimized the FUSE implementation used within restic.
+`restic mount` is now more responsive and uses less memory.
+
+https://github.com/restic/restic/issues/1680
+https://github.com/restic/restic/pull/2587
+https://github.com/restic/restic/pull/2787

changelog/0.10.0_2020-09-19/issue-1863 (new file, 6 lines)
@@ -0,0 +1,6 @@
+Bugfix: Report correct number of directories processed by backup
+
+The directory statistics calculation was fixed to report the actual number
+of processed directories instead of always zero.
+
+https://github.com/restic/restic/issues/1863

changelog/0.10.0_2020-09-19/issue-2175 (new file, 9 lines)
@@ -0,0 +1,9 @@
+Enhancement: Allow specifying user and host when creating keys
+
+When adding a new key to the repository, the username and hostname for the new
+key can be specified on the command line. This allows overriding the defaults,
+for example if you would prefer to use the FQDN to identify the host or if you
+want to add keys for several different hosts without having to run the key add
+command on those hosts.
+
+https://github.com/restic/restic/issues/2175
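A hedged sketch of the key-creation override described above; the `--user` and `--host` flag names are an assumption based on this entry, not verified against the 0.10.0 CLI:

```shell
# Add a key that records an explicit username and FQDN instead of the
# defaults of the machine running the command (flag names assumed).
restic -r /srv/restic-repo key add --user backup --host server1.example.com
```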
changelog/0.10.0_2020-09-19/issue-2254 (new file, 9 lines)
@@ -0,0 +1,9 @@
+Bugfix: Fix tar issues when dumping `/`
+
+We've fixed an issue with dumping either `/` or files on the first sublevel
+e.g. `/foo` to tar. This also fixes tar dumping issues on Windows where this
+issue could also happen.
+
+https://github.com/restic/restic/issues/2254
+https://github.com/restic/restic/issues/2357
+https://github.com/restic/restic/pull/2255

changelog/0.10.0_2020-09-19/issue-2395 (new file, 12 lines)
@@ -0,0 +1,12 @@
+Enhancement: Ignore sync errors when operation not supported by local filesystem
+
+The local backend has been modified to work with filesystems which doesn't support
+the `sync` operation. This operation is normally used by restic to ensure that data
+files are fully written to disk before continuing.
+
+For these limited filesystems, saving a file in the backend would previously fail with
+an "operation not supported" error. This error is now ignored, which means that e.g.
+an SMB mount on macOS can now be used as storage location for a repository.
+
+https://github.com/restic/restic/issues/2395
+https://forum.restic.net/t/sync-errors-on-mac-over-smb/1859

changelog/0.10.0_2020-09-19/issue-2427 (new file, 7 lines)
@@ -0,0 +1,7 @@
+Enhancement: Add flag `--iexclude-file` to backup command
+
+The backup command now supports the flag `--iexclude-file` which is a
+case-insensitive version of `--exclude-file`.
+
+https://github.com/restic/restic/issues/2427
+https://github.com/restic/restic/pull/2898
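A short usage sketch for the new flag: the pattern file and paths are placeholders, and the point is that one pattern file covers every casing, e.g. both "Thumbs.db" and "thumbs.db":

```shell
# Patterns in excludes.txt are matched case-insensitively via --iexclude-file.
cat > excludes.txt <<EOF
thumbs.db
*.tmp
EOF
restic -r /srv/restic-repo backup --iexclude-file excludes.txt /home/user
```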
changelog/0.10.0_2020-09-19/issue-2569 (new file, 8 lines)
@@ -0,0 +1,8 @@
+Enhancement: Support excluding files by their size
+
+The `backup` command now supports the `--exclude-larger-than` option to exclude files which are
+larger than the specified maximum size. This can for example be useful to exclude unimportant
+files with a large file size.
+
+https://github.com/restic/restic/issues/2569
+https://github.com/restic/restic/pull/2914
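A usage sketch for the size-based exclude; the `100M` suffix form is an assumption based on restic's usual K/M/G/T size units and is not spelled out in the entry above:

```shell
# Skip anything over ~100 MiB, e.g. disposable VM images or ISO downloads,
# while backing up the rest of the home directory.
restic -r /srv/restic-repo backup --exclude-larger-than 100M /home/user
```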
changelog/0.10.0_2020-09-19/issue-2571 (new file, 16 lines)
@@ -0,0 +1,16 @@
+Enhancement: Self-heal missing file parts during backup of unchanged files
+
+We've improved the resilience of restic to certain types of repository corruption.
+
+For files that are unchanged since the parent snapshot, the backup command now
+verifies that all parts of the files still exist in the repository. Parts that are
+missing, e.g. from a damaged repository, are backed up again. This verification
+was already run for files that were modified since the parent snapshot, but is
+now also done for unchanged files.
+
+Note that restic will not backup file parts that are referenced in the index but
+where the actual data is not present on disk, as this situation can only be
+detected by restic check. Please ensure that you run `restic check` regularly.
+
+https://github.com/restic/restic/issues/2571
+https://github.com/restic/restic/pull/2827

changelog/0.10.0_2020-09-19/issue-2858 (new file, 9 lines)
@@ -0,0 +1,9 @@
+Enhancement: Support filtering snapshots by tag and path in the stats command
+
+We've added filtering snapshots by `--tag tagList` and by `--path path` to
+the `stats` command. This includes filtering of only 'latest' snapshots or
+all snapshots in a repository.
+
+https://github.com/restic/restic/issues/2858
+https://github.com/restic/restic/pull/2859
+https://forum.restic.net/t/stats-for-a-host-and-filtered-snapshots/3020
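The new stats filters combine with the usual snapshot selectors; a sketch with placeholder tag and path values:

```shell
# Statistics restricted to the latest snapshot(s) carrying the "mysql" tag
# and containing the given path.
restic -r /srv/restic-repo stats --tag mysql --path /var/lib/mysql latest
```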
changelog/0.10.0_2020-09-19/issue-323 (new file, 20 lines)
@@ -0,0 +1,20 @@
+Enhancement: Add command for copying snapshots between repositories
+
+We've added a copy command, allowing you to copy snapshots from one
+repository to another.
+
+Note that this process will have to read (download) and write (upload) the
+entire snapshot(s) due to the different encryption keys used on the source
+and destination repository. Also, the transferred files are not re-chunked,
+which may break deduplication between files already stored in the
+destination repo and files copied there using this command.
+
+To fully support deduplication between repositories when the copy command is
+used, the init command now supports the `--copy-chunker-params` option,
+which initializes the new repository with identical parameters for splitting
+files into chunks as an already existing repository. This allows copied
+snapshots to be equally deduplicated in both repositories.
+
+https://github.com/restic/restic/issues/323
+https://github.com/restic/restic/pull/2606
+https://github.com/restic/restic/pull/2928
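A hedged sketch of the workflow described above; `--copy-chunker-params` comes from the entry itself, while the `--repo2` flag for naming the second repository is an assumption about the 0.10.0 CLI, and the paths are placeholders:

```shell
# Create the destination with the same chunker parameters as the source,
# so copied snapshots deduplicate identically in both repositories.
restic -r /srv/repo-copy init --repo2 /srv/repo --copy-chunker-params

# Then copy the latest snapshot into the new repository.
restic -r /srv/repo-copy copy --repo2 /srv/repo latest
```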
changelog/0.10.0_2020-09-19/issue-551 (new file, 10 lines)
@@ -0,0 +1,10 @@
+Enhancement: Use optimized library for hash calculation of file chunks
+
+We've switched the library used to calculate the hashes of file chunks, which
+are used for deduplication, to the optimized Minio SHA-256 implementation.
+
+Depending on the CPU it improves the hashing throughput by 10-30%. Modern x86
+CPUs with the SHA Extension should be about two to three times faster.
+
+https://github.com/restic/restic/issues/551
+https://github.com/restic/restic/pull/2709

@@ -14,4 +14,10 @@ file can be written to the file before any of the preceeding file blobs.
 It is therefore possible to have gaps in the data written to the target
 files if restore fails or interrupted by the user.

+The implementation will try to preallocate space for the restored files
+on the filesystem to prevent file fragmentation. This ensures good read
+performance for large files, like for example VM images. If preallocating
+space is not supported by the filesystem, then this step is silently skipped.
+
 https://github.com/restic/restic/pull/2195
+https://github.com/restic/restic/pull/2893

changelog/0.10.0_2020-09-19/pull-2328 (new file, 8 lines)
@@ -0,0 +1,8 @@
+Enhancement: Improve speed of check command
+
+We've improved the check command to traverse trees only once independent of
+whether they are contained in multiple snapshots. The check command is now much
+faster for repositories with a large number of snapshots.
+
+https://github.com/restic/restic/pull/2328
+https://github.com/restic/restic/issues/2284
changelog/0.10.0_2020-09-19/pull-2546 (new file, 19 lines)
@@ -0,0 +1,19 @@
+Change: Return exit code 3 when failing to backup all source data
+
+The backup command used to return a zero exit code as long as a snapshot
+could be created successfully, even if some of the source files could not
+be read (in which case the snapshot would contain the rest of the files).
+
+This made it hard for automation/scripts to detect failures/incomplete
+backups by looking at the exit code. Restic now returns the following exit
+codes for the backup command:
+
+ - 0 when the command was successful
+ - 1 when there was a fatal error (no snapshot created)
+ - 3 when some source data could not be read (incomplete snapshot created)
+
+https://github.com/restic/restic/pull/2546
+https://github.com/restic/restic/issues/956
+https://github.com/restic/restic/issues/2064
+https://github.com/restic/restic/issues/2526
+https://github.com/restic/restic/issues/2364
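A wrapper script can now distinguish the three outcomes. In this sketch the `restic` call is stubbed with a shell function so the dispatch logic itself can be demonstrated without a repository:

```shell
# Stub standing in for the real binary: pretend some files were unreadable.
restic() { return 3; }

restic backup /home/user
case $? in
  0) status="complete" ;;      # snapshot created, all source data read
  1) status="fatal" ;;         # no snapshot created
  3) status="incomplete" ;;    # snapshot created, some data could not be read
  *) status="unknown" ;;
esac
echo "backup status: $status"  # prints: backup status: incomplete
```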
@@ -1,7 +1,9 @@
 Enhancement: Improve the chunking algorithm

 We've updated the chunker library responsible for splitting files into smaller
-blocks. It should improve the chunking throughput by 5-10% depending on the
+blocks. It should improve the chunking throughput by 5-15% depending on the
 CPU.

 https://github.com/restic/restic/pull/2576
+https://github.com/restic/restic/pull/2845
+https://github.com/restic/restic/issues/2820

changelog/0.10.0_2020-09-19/pull-2598 (new file, 6 lines)
@@ -0,0 +1,6 @@
+Enhancement: Improve speed of diff command
+
+We've improved the performance of the diff command when comparing snapshots
+with similar content. It should run up to twice as fast as before.
+
+https://github.com/restic/restic/pull/2598

changelog/0.10.0_2020-09-19/pull-2599 (new file, 6 lines)
@@ -0,0 +1,6 @@
+Enhancement: Slightly reduce memory usage of prune and stats commands
+
+The prune and the stats command kept directory identifiers in memory twice
+while searching for used blobs.
+
+https://github.com/restic/restic/pull/2599

changelog/0.10.0_2020-09-19/pull-2600 (new file, 14 lines)
@@ -0,0 +1,14 @@
+Change: Update dependencies, require Go >= 1.13
+
+Restic now requires Go to be at least 1.13. This allows simplifications in the
+build process and removing workarounds.
+
+This is also probably the last version of restic still supporting mounting
+repositories via fuse on macOS. The library we're using for fuse does not
+support macOS any more and osxfuse is not open source any more.
+
+https://github.com/restic/restic/pull/2600
+https://github.com/restic/restic/pull/2852
+https://github.com/restic/restic/pull/2927
+https://github.com/bazil/fuse/issues/224
+https://github.com/osxfuse/osxfuse/issues/590

changelog/0.10.0_2020-09-19/pull-2674 (new file, 8 lines)
@@ -0,0 +1,8 @@
+Bugfix: Add stricter prune error checks
+
+Additional checks were added to the prune command in order to improve
+resiliency to backend, hardware and/or networking issues. The checks now
+detect a few more cases where such outside factors could potentially cause
+data loss.
+
+https://github.com/restic/restic/pull/2674

changelog/0.10.0_2020-09-19/pull-2733 (new file, 6 lines)
@@ -0,0 +1,6 @@
+Enhancement: S3 backend: Add support for WebIdentityTokenFile
+
+We've added support for EKS IAM roles for service accounts feature to the S3 backend.
+
+https://github.com/restic/restic/pull/2733
+https://github.com/restic/restic/issues/2703

changelog/0.10.0_2020-09-19/pull-2773 (new file, 6 lines)
@@ -0,0 +1,6 @@
+Enhancement: Optimize handling of new index entries
+
+Restic now uses less memory for backups which add a lot of data, e.g. large initial backups.
+In addition, we've improved the stability in some edge cases.
+
+https://github.com/restic/restic/pull/2773
changelog/0.10.0_2020-09-19/pull-2781 (new file, 8 lines)
@@ -0,0 +1,8 @@
+Enhancement: Reduce memory consumption of in-memory index
+
+We've improved how the index is stored in memory.
+This change can reduce memory usage for large repositories by up to 50%
+(depending on the operation).
+
+https://github.com/restic/restic/pull/2781
+https://github.com/restic/restic/pull/2812

changelog/0.10.0_2020-09-19/pull-2786 (new file, 6 lines)
@@ -0,0 +1,6 @@
+Enhancement: Optimize `list blobs` command
+
+We've changed the implementation of `list blobs` which should be now a bit faster
+and consume almost no memory even for large repositories.
+
+https://github.com/restic/restic/pull/2786

changelog/0.10.0_2020-09-19/pull-2790 (new file, 6 lines)
@@ -0,0 +1,6 @@
+Enhancement: Optimized file access in restic mount
+
+Reading large (> 100GiB) files from restic mountpoints is now faster,
+and the speedup is greater for larger files.
+
+https://github.com/restic/restic/pull/2790

changelog/0.10.0_2020-09-19/pull-2821 (new file, 8 lines)
@@ -0,0 +1,8 @@
+Change: Honor the --no-lock flag in the mount command
+
+The mount command now does not lock the repository if given the
+--no-lock flag. This allows to mount repositories which are archived
+on a read only backend/filesystem.
+
+https://github.com/restic/restic/issues/1597
+https://github.com/restic/restic/pull/2821
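A usage sketch for the read-only mount case described above; the repository and mountpoint paths are placeholders:

```shell
# Mount a repository that lives on read-only storage; --no-lock skips
# creating a lock file in the repository, which would otherwise fail.
restic -r /srv/restic-repo mount --no-lock /mnt/restic
```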
changelog/0.10.0_2020-09-19/pull-2840 (new file, 7 lines)
@@ -0,0 +1,7 @@
+Enhancement: Speed-up file deletion in forget, prune and rebuild-index
+
+We've sped up the file deletion for the commands forget, prune and
+rebuild-index, especially for remote repositories.
+Deletion was sequential before and is now run in parallel.
+
+https://github.com/restic/restic/pull/2840

changelog/0.10.0_2020-09-19/pull-2899 (new file, 9 lines)
@@ -0,0 +1,9 @@
+Bugfix: Fix possible crash in the progress bar of check --read-data
+
+We've fixed a possible crash while displaying the progress bar for the
+check --read-data command. The crash occurred when the length of the
+progress bar status exceeded the terminal width, which only happened for
+very narrow terminal windows.
+
+https://github.com/restic/restic/pull/2899
+https://forum.restic.net/t/restic-rclone-pcloud-connection-issues/2963/15
@@ -7,7 +7,7 @@ vulnerability, but urge all users to upgrade to the latest version of restic.

 Exploiting the vulnerability requires a Linux/Unix system which saves backups
 via restic and a Windows systems which restores files from the repo. In
-addition, the attackers need to be able to create create files with arbitrary
+addition, the attackers need to be able to create files with arbitrary
 names which are then saved to the restic repo. For example, by creating a file
 named "..\test.txt" (which is a perfectly legal filename on Linux) and
 restoring a snapshot containing this file on Windows, it would be written to
@@ -1,6 +0,0 @@
-Change: Require Go >= 1.11
-
-Restic now requires Go to be at least 1.11. This allows simplifications in the
-build process and removing workarounds.
-
-https://github.com/restic/restic/pull/2600
@@ -1,7 +1,6 @@
 package main

 import (
-	"fmt"
 	"os"
 	"os/signal"
 	"sync"
@@ -17,8 +16,6 @@ var cleanupHandlers struct {
 	ch chan os.Signal
 }

-var stderr = os.Stderr
-
 func init() {
 	cleanupHandlers.ch = make(chan os.Signal, 1)
 	go CleanupHandler(cleanupHandlers.ch)
@@ -51,7 +48,7 @@ func RunCleanupHandlers() {
 	for _, f := range cleanupHandlers.list {
 		err := f()
 		if err != nil {
-			fmt.Fprintf(stderr, "error in cleanup handler: %v\n", err)
+			Warnf("error in cleanup handler: %v\n", err)
 		}
 	}
 	cleanupHandlers.list = nil
@@ -61,7 +58,7 @@ func RunCleanupHandlers() {
 func CleanupHandler(c <-chan os.Signal) {
 	for s := range c {
 		debug.Log("signal %v received, cleaning up", s)
-		fmt.Fprintf(stderr, "%ssignal %v received, cleaning up\n", ClearLine(), s)
+		Warnf("%ssignal %v received, cleaning up\n", ClearLine(), s)

 		code := 0

@@ -39,10 +39,9 @@ given as the arguments.
 EXIT STATUS
 ===========

-Exit status is 0 if the command was successful, and non-zero if there was any error.
-Note that some issues such as unreadable or deleted files during backup
-currently doesn't result in a non-zero error exit status.
+Exit status is 0 if the command was successful.
+Exit status is 1 if there was a fatal error (no snapshot created).
+Exit status is 3 if some source data could not be read (incomplete snapshot created).
 `,
	PreRun: func(cmd *cobra.Command, args []string) {
		if backupOptions.Host == "" {
@@ -79,26 +78,31 @@ currently doesn't result in a non-zero error exit status.

 // BackupOptions bundles all options for the backup command.
 type BackupOptions struct {
 	Parent                  string
 	Force                   bool
 	Excludes                []string
 	InsensitiveExcludes     []string
 	ExcludeFiles            []string
+	InsensitiveExcludeFiles []string
 	ExcludeOtherFS          bool
 	ExcludeIfPresent        []string
 	ExcludeCaches           bool
+	ExcludeLargerThan       string
 	Stdin                   bool
 	StdinFilename           string
 	Tags                    []string
 	Host                    string
 	FilesFrom               []string
 	TimeStamp               string
 	WithAtime               bool
 	IgnoreInode             bool
 }

 var backupOptions BackupOptions

+// ErrInvalidSourceData is used to report an incomplete backup
+var ErrInvalidSourceData = errors.New("failed to read all source data during backup")
+
 func init() {
 	cmdRoot.AddCommand(cmdBackup)

@@ -108,9 +112,11 @@ func init() {
 	f.StringArrayVarP(&backupOptions.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
 	f.StringArrayVar(&backupOptions.InsensitiveExcludes, "iexclude", nil, "same as --exclude `pattern` but ignores the casing of filenames")
 	f.StringArrayVar(&backupOptions.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
+	f.StringArrayVar(&backupOptions.InsensitiveExcludeFiles, "iexclude-file", nil, "same as --exclude-file but ignores casing of `file`names in patterns")
 	f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems")
 	f.StringArrayVar(&backupOptions.ExcludeIfPresent, "exclude-if-present", nil, "takes `filename[:header]`, exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)")
-	f.BoolVar(&backupOptions.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file. See http://bford.info/cachedir/spec.html for the Cache Directory Tagging Standard`)
+	f.BoolVar(&backupOptions.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file. See https://bford.info/cachedir/ for the Cache Directory Tagging Standard`)
+	f.StringVar(&backupOptions.ExcludeLargerThan, "exclude-larger-than", "", "max `size` of the files to be backed up (allowed suffixes: k/K, m/M, g/G, t/T)")
 	f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
 	f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "`filename` to use when reading from stdin")
 	f.StringArrayVar(&backupOptions.Tags, "tag", nil, "add a `tag` for the new snapshot (can be specified multiple times)")
@@ -237,6 +243,14 @@ func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository, t
 		opts.Excludes = append(opts.Excludes, excludes...)
 	}

+	if len(opts.InsensitiveExcludeFiles) > 0 {
+		excludes, err := readExcludePatternsFromFiles(opts.InsensitiveExcludeFiles)
+		if err != nil {
+			return nil, err
+		}
+		opts.InsensitiveExcludes = append(opts.InsensitiveExcludes, excludes...)
+	}
+
 	if len(opts.InsensitiveExcludes) > 0 {
 		fs = append(fs, rejectByInsensitivePattern(opts.InsensitiveExcludes))
 	}
@@ -273,6 +287,14 @@ func collectRejectFuncs(opts BackupOptions, repo *repository.Repository, targets
 		fs = append(fs, f)
 	}

+	if len(opts.ExcludeLargerThan) != 0 && !opts.Stdin {
+		f, err := rejectBySize(opts.ExcludeLargerThan)
+		if err != nil {
+			return nil, err
+		}
+		fs = append(fs, f)
+	}
+
 	return fs, nil
 }

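The hunk above wires `--exclude-larger-than` into the reject-function pipeline via `rejectBySize`, whose implementation is not shown in this diff. A self-contained sketch of what such a size-threshold reject function could look like, including a toy parser for the suffixes the flag documents (k/K, m/M, g/G, t/T); this is an illustration, not restic's actual parser:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize interprets a size string with binary suffixes, as the
// --exclude-larger-than help text describes (sketch only).
func parseSize(s string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(strings.ToLower(s), "k"):
		mult, s = 1<<10, s[:len(s)-1]
	case strings.HasSuffix(strings.ToLower(s), "m"):
		mult, s = 1<<20, s[:len(s)-1]
	case strings.HasSuffix(strings.ToLower(s), "g"):
		mult, s = 1<<30, s[:len(s)-1]
	case strings.HasSuffix(strings.ToLower(s), "t"):
		mult, s = 1<<40, s[:len(s)-1]
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return n * mult, nil
}

// rejectBySize returns a predicate reporting whether a file of the
// given size exceeds the configured maximum and should be excluded.
func rejectBySize(maxSize string) (func(int64) bool, error) {
	max, err := parseSize(maxSize)
	if err != nil {
		return nil, err
	}
	return func(size int64) bool { return size > max }, nil
}

func main() {
	reject, err := rejectBySize("10M")
	if err != nil {
		panic(err)
	}
	fmt.Println(reject(11*1024*1024), reject(10*1024*1024))
}
```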
@@ -415,7 +437,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
 	var t tomb.Tomb

 	if gopts.verbosity >= 2 && !gopts.JSON {
-		term.Print("open repository\n")
+		Verbosef("open repository\n")
 	}

 	repo, err := OpenRepository(gopts)
@@ -557,7 +579,11 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
 	arch.SelectByName = selectByNameFilter
 	arch.Select = selectFilter
 	arch.WithAtime = opts.WithAtime
-	arch.Error = p.Error
+	success := true
+	arch.Error = func(item string, fi os.FileInfo, err error) error {
+		success = false
+		return p.Error(item, fi, err)
+	}
 	arch.CompleteItem = p.CompleteItem
 	arch.StartFile = p.StartFile
 	arch.CompleteBlob = p.CompleteBlob
@@ -575,24 +601,6 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
 		ParentSnapshot: *parentSnapshotID,
 	}

-	uploader := archiver.IndexUploader{
-		Repository: repo,
-		Start: func() {
-			if !gopts.JSON {
-				p.VV("uploading intermediate index")
-			}
-		},
-		Complete: func(id restic.ID) {
-			if !gopts.JSON {
-				p.V("uploaded intermediate index %v", id.Str())
-			}
-		},
-	}
-
-	t.Go(func() error {
-		return uploader.Upload(gopts.ctx, t.Context(gopts.ctx), 30*time.Second)
-	})
-
 	if !gopts.JSON {
 		p.V("start backup on %v", targets)
 	}
@@ -612,6 +620,9 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
 	if !gopts.JSON {
 		p.P("snapshot %s saved\n", id.Str())
 	}
+	if !success {
+		return ErrInvalidSourceData
+	}

 	// Return error if any
 	return err
@@ -2,8 +2,6 @@ package main

 import (
 	"encoding/json"
-	"fmt"
-	"os"

 	"github.com/spf13/cobra"

@@ -76,7 +74,7 @@ func runCat(gopts GlobalOptions, args []string) error {
 			return err
 		}

-		fmt.Println(string(buf))
+		Println(string(buf))
 		return nil
 	case "index":
 		buf, err := repo.LoadAndDecrypt(gopts.ctx, nil, restic.IndexFile, id)
@@ -84,9 +82,8 @@ func runCat(gopts GlobalOptions, args []string) error {
 			return err
 		}

-		_, err = os.Stdout.Write(append(buf, '\n'))
-		return err
-
+		Println(string(buf))
+		return nil
 	case "snapshot":
 		sn := &restic.Snapshot{}
 		err = repo.LoadJSONUnpacked(gopts.ctx, restic.SnapshotFile, id, sn)
@@ -99,8 +96,7 @@ func runCat(gopts GlobalOptions, args []string) error {
 			return err
 		}

-		fmt.Println(string(buf))
-
+		Println(string(buf))
 		return nil
 	case "key":
 		h := restic.Handle{Type: restic.KeyFile, Name: id.String()}
@@ -120,7 +116,7 @@ func runCat(gopts GlobalOptions, args []string) error {
 			return err
 		}

-		fmt.Println(string(buf))
+		Println(string(buf))
 		return nil
 	case "masterkey":
 		buf, err := json.MarshalIndent(repo.Key(), "", "  ")
@@ -128,7 +124,7 @@ func runCat(gopts GlobalOptions, args []string) error {
 			return err
 		}

-		fmt.Println(string(buf))
+		Println(string(buf))
 		return nil
 	case "lock":
 		lock, err := restic.LoadLock(gopts.ctx, repo, id)
@@ -141,8 +137,7 @@ func runCat(gopts GlobalOptions, args []string) error {
 			return err
 		}

-		fmt.Println(string(buf))
-
+		Println(string(buf))
 		return nil
 	}

@@ -154,7 +149,7 @@ func runCat(gopts GlobalOptions, args []string) error {

 	switch tpe {
 	case "pack":
-		h := restic.Handle{Type: restic.DataFile, Name: id.String()}
+		h := restic.Handle{Type: restic.PackFile, Name: id.String()}
 		buf, err := backend.LoadAll(gopts.ctx, nil, repo.Backend(), h)
 		if err != nil {
 			return err
@@ -162,16 +157,15 @@ func runCat(gopts GlobalOptions, args []string) error {

 		hash := restic.Hash(buf)
 		if !hash.Equal(id) {
-			fmt.Fprintf(stderr, "Warning: hash of data does not match ID, want\n  %v\ngot:\n  %v\n", id.String(), hash.String())
+			Warnf("Warning: hash of data does not match ID, want\n  %v\ngot:\n  %v\n", id.String(), hash.String())
 		}

-		_, err = os.Stdout.Write(buf)
+		_, err = globalOptions.stdout.Write(buf)
 		return err

 	case "blob":
 		for _, t := range []restic.BlobType{restic.DataBlob, restic.TreeBlob} {
-			_, found := repo.Index().Lookup(id, t)
-			if !found {
+			if !repo.Index().Has(id, t) {
 				continue
 			}

@@ -180,7 +174,7 @@ func runCat(gopts GlobalOptions, args []string) error {
 			return err
 		}

-		_, err = os.Stdout.Write(buf)
+		_, err = globalOptions.stdout.Write(buf)
 		return err
 	}

@@ -3,10 +3,8 @@ package main
 import (
 	"fmt"
 	"io/ioutil"
-	"os"
 	"strconv"
 	"strings"
-	"time"

 	"github.com/spf13/cobra"

@@ -100,36 +98,6 @@ func stringToIntSlice(param string) (split []uint, err error) {
 	return result, nil
 }

-func newReadProgress(gopts GlobalOptions, todo restic.Stat) *restic.Progress {
-	if gopts.Quiet {
-		return nil
-	}
-
-	readProgress := restic.NewProgress()
-
-	readProgress.OnUpdate = func(s restic.Stat, d time.Duration, ticker bool) {
-		status := fmt.Sprintf("[%s] %s %d / %d items",
-			formatDuration(d),
-			formatPercent(s.Blobs, todo.Blobs),
-			s.Blobs, todo.Blobs)
-
-		if w := stdoutTerminalWidth(); w > 0 {
-			if len(status) > w {
-				max := w - len(status) - 4
-				status = status[:max] + "... "
-			}
-		}
-
-		PrintProgress("%s", status)
-	}
-
-	readProgress.OnDone = func(s restic.Stat, d time.Duration, ticker bool) {
-		fmt.Printf("\nduration: %s\n", formatDuration(d))
-	}
-
-	return readProgress
-}
-
 // prepareCheckCache configures a special cache directory for check.
 //
 // * if --with-cache is specified, the default cache is used
@@ -235,7 +203,7 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 			continue
 		}
 		errorsFound = true
-		fmt.Fprintf(os.Stderr, "%v\n", err)
+		Warnf("%v\n", err)
 	}

 	if orphanedPacks > 0 {
@@ -249,18 +217,18 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 	for err := range errChan {
 		errorsFound = true
 		if e, ok := err.(checker.TreeError); ok {
-			fmt.Fprintf(os.Stderr, "error for tree %v:\n", e.ID.Str())
+			Warnf("error for tree %v:\n", e.ID.Str())
 			for _, treeErr := range e.Errors {
-				fmt.Fprintf(os.Stderr, "  %v\n", treeErr)
+				Warnf("  %v\n", treeErr)
 			}
 		} else {
-			fmt.Fprintf(os.Stderr, "error: %v\n", err)
+			Warnf("error: %v\n", err)
 		}
 	}

 	if opts.CheckUnused {
 		for _, id := range chkr.UnusedBlobs() {
-			Verbosef("unused blob %v\n", id.Str())
+			Verbosef("unused blob %v\n", id)
 			errorsFound = true
 		}
 	}
@@ -282,14 +250,14 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
 		Verbosef("read all data\n")
 	}

-	p := newReadProgress(gopts, restic.Stat{Blobs: packCount})
+	p := newProgressMax(!gopts.Quiet, packCount, "packs")
 	errChan := make(chan error)

 	go chkr.ReadPacks(gopts.ctx, packs, p, errChan)

 	for err := range errChan {
 		errorsFound = true
-		fmt.Fprintf(os.Stderr, "%v\n", err)
+		Warnf("%v\n", err)
 	}
 }
233	cmd/restic/cmd_copy.go	Normal file
@@ -0,0 +1,233 @@
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/restic/restic/internal/debug"
+	"github.com/restic/restic/internal/restic"
+
+	"github.com/spf13/cobra"
+)
+
+var cmdCopy = &cobra.Command{
+	Use:   "copy [flags] [snapshotID ...]",
+	Short: "Copy snapshots from one repository to another",
+	Long: `
+The "copy" command copies one or more snapshots from one repository to another
+repository. Note that this will have to read (download) and write (upload) the
+entire snapshot(s) due to the different encryption keys on the source and
+destination, and that transferred files are not re-chunked, which may break
+their deduplication. This can be mitigated by the "--copy-chunker-params"
+option when initializing a new destination repository using the "init" command.
+`,
+	RunE: func(cmd *cobra.Command, args []string) error {
+		return runCopy(copyOptions, globalOptions, args)
+	},
+}
+
+// CopyOptions bundles all options for the copy command.
+type CopyOptions struct {
+	secondaryRepoOptions
+	Hosts []string
+	Tags  restic.TagLists
+	Paths []string
+}
+
+var copyOptions CopyOptions
+
+func init() {
+	cmdRoot.AddCommand(cmdCopy)
+
+	f := cmdCopy.Flags()
+	initSecondaryRepoOptions(f, &copyOptions.secondaryRepoOptions, "destination", "to copy snapshots to")
+	f.StringArrayVarP(&copyOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when no snapshot ID is given (can be specified multiple times)")
+	f.Var(&copyOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot ID is given")
+	f.StringArrayVar(&copyOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot ID is given")
+}
+
+func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
+	dstGopts, err := fillSecondaryGlobalOpts(opts.secondaryRepoOptions, gopts, "destination")
+	if err != nil {
+		return err
+	}
+
+	ctx, cancel := context.WithCancel(gopts.ctx)
+	defer cancel()
+
+	srcRepo, err := OpenRepository(gopts)
+	if err != nil {
+		return err
+	}
+
+	dstRepo, err := OpenRepository(dstGopts)
+	if err != nil {
+		return err
+	}
+
+	srcLock, err := lockRepo(srcRepo)
+	defer unlockRepo(srcLock)
+	if err != nil {
+		return err
+	}
+
+	dstLock, err := lockRepo(dstRepo)
+	defer unlockRepo(dstLock)
+	if err != nil {
+		return err
+	}
+
+	debug.Log("Loading source index")
+	if err := srcRepo.LoadIndex(ctx); err != nil {
+		return err
+	}
+
+	debug.Log("Loading destination index")
+	if err := dstRepo.LoadIndex(ctx); err != nil {
+		return err
+	}
+
+	dstSnapshotByOriginal := make(map[restic.ID][]*restic.Snapshot)
+	for sn := range FindFilteredSnapshots(ctx, dstRepo, opts.Hosts, opts.Tags, opts.Paths, nil) {
+		if sn.Original != nil && !sn.Original.IsNull() {
+			dstSnapshotByOriginal[*sn.Original] = append(dstSnapshotByOriginal[*sn.Original], sn)
+		}
+		// also consider identical snapshot copies
+		dstSnapshotByOriginal[*sn.ID()] = append(dstSnapshotByOriginal[*sn.ID()], sn)
+	}
+
+	cloner := &treeCloner{
+		srcRepo:      srcRepo,
+		dstRepo:      dstRepo,
+		visitedTrees: restic.NewIDSet(),
+		buf:          nil,
+	}
+
+	for sn := range FindFilteredSnapshots(ctx, srcRepo, opts.Hosts, opts.Tags, opts.Paths, args) {
+		Verbosef("\nsnapshot %s of %v at %s)\n", sn.ID().Str(), sn.Paths, sn.Time)
+
+		// check whether the destination has a snapshot with the same persistent ID which has similar snapshot fields
+		srcOriginal := *sn.ID()
+		if sn.Original != nil {
+			srcOriginal = *sn.Original
+		}
+		if originalSns, ok := dstSnapshotByOriginal[srcOriginal]; ok {
+			isCopy := false
+			for _, originalSn := range originalSns {
+				if similarSnapshots(originalSn, sn) {
+					Verbosef("skipping source snapshot %s, was already copied to snapshot %s\n", sn.ID().Str(), originalSn.ID().Str())
+					isCopy = true
+					break
+				}
+			}
+			if isCopy {
+				continue
+			}
+		}
+		Verbosef("  copy started, this may take a while...\n")
+
+		if err := cloner.copyTree(ctx, *sn.Tree); err != nil {
+			return err
+		}
+		debug.Log("tree copied")
+
+		if err = dstRepo.Flush(ctx); err != nil {
+			return err
+		}
+		debug.Log("flushed packs and saved index")
+
+		// save snapshot
+		sn.Parent = nil // Parent does not have relevance in the new repo.
+		// Use Original as a persistent snapshot ID
+		if sn.Original == nil {
+			sn.Original = sn.ID()
+		}
+		newID, err := dstRepo.SaveJSONUnpacked(ctx, restic.SnapshotFile, sn)
+		if err != nil {
+			return err
+		}
+		Verbosef("snapshot %s saved\n", newID.Str())
+	}
+	return nil
+}
+
+func similarSnapshots(sna *restic.Snapshot, snb *restic.Snapshot) bool {
+	// everything except Parent and Original must match
+	if !sna.Time.Equal(snb.Time) || !sna.Tree.Equal(*snb.Tree) || sna.Hostname != snb.Hostname ||
+		sna.Username != snb.Username || sna.UID != snb.UID || sna.GID != snb.GID ||
+		len(sna.Paths) != len(snb.Paths) || len(sna.Excludes) != len(snb.Excludes) ||
+		len(sna.Tags) != len(snb.Tags) {
+		return false
+	}
+	if !sna.HasPaths(snb.Paths) || !sna.HasTags(snb.Tags) {
+		return false
+	}
+	for i, a := range sna.Excludes {
+		if a != snb.Excludes[i] {
+			return false
+		}
+	}
+	return true
+}
+
+type treeCloner struct {
+	srcRepo      restic.Repository
+	dstRepo      restic.Repository
+	visitedTrees restic.IDSet
+	buf          []byte
+}
+
+func (t *treeCloner) copyTree(ctx context.Context, treeID restic.ID) error {
+	// We have already processed this tree
+	if t.visitedTrees.Has(treeID) {
+		return nil
+	}
+
+	tree, err := t.srcRepo.LoadTree(ctx, treeID)
+	if err != nil {
+		return fmt.Errorf("LoadTree(%v) returned error %v", treeID.Str(), err)
+	}
+	t.visitedTrees.Insert(treeID)
+
+	// Do we already have this tree blob?
+	if !t.dstRepo.Index().Has(treeID, restic.TreeBlob) {
+		newTreeID, err := t.dstRepo.SaveTree(ctx, tree)
+		if err != nil {
+			return fmt.Errorf("SaveTree(%v) returned error %v", treeID.Str(), err)
+		}
+		// Assurance only.
+		if newTreeID != treeID {
+			return fmt.Errorf("SaveTree(%v) returned unexpected id %s", treeID.Str(), newTreeID.Str())
+		}
+	}
+
+	// TODO: parallelize this stuff, likely only needed inside a tree.
+
+	for _, entry := range tree.Nodes {
+		// If it is a directory, recurse
+		if entry.Type == "dir" && entry.Subtree != nil {
+			if err := t.copyTree(ctx, *entry.Subtree); err != nil {
+				return err
+			}
+		}
+		// Copy the blobs for this file.
+		for _, blobID := range entry.Content {
+			// Do we already have this data blob?
+			if t.dstRepo.Index().Has(blobID, restic.DataBlob) {
+				continue
+			}
+			debug.Log("Copying blob %s\n", blobID.Str())
+			t.buf, err = t.srcRepo.LoadBlob(ctx, restic.DataBlob, blobID, t.buf)
+			if err != nil {
+				return fmt.Errorf("LoadBlob(%v) returned error %v", blobID, err)
+			}
+
+			_, _, err = t.dstRepo.SaveBlob(ctx, restic.DataBlob, t.buf, blobID, false)
+			if err != nil {
+				return fmt.Errorf("SaveBlob(%v) returned error %v", blobID, err)
+			}
+		}
+	}
+
+	return nil
+}
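The core of the new `treeCloner` above is the `visitedTrees` ID set: because restic deduplicates, many parents can reference the same subtree, and the visited set guarantees each shared subtree is loaded and copied only once. The same memoized traversal on a toy tree (toy types, not restic's):

```go
package main

import "fmt"

// node is a toy stand-in for a restic tree: directories reference
// subtrees, and identical subtrees share one ID (deduplication).
type node struct {
	id       string
	children []*node
}

// copier mirrors treeCloner: the visited set ensures each subtree is
// "copied" only once even when several parents reference it.
type copier struct {
	visited map[string]bool
	copies  int
}

func (c *copier) copyTree(n *node) {
	if c.visited[n.id] {
		return // already copied via another parent
	}
	c.visited[n.id] = true
	c.copies++
	for _, child := range n.children {
		c.copyTree(child)
	}
}

func main() {
	shared := &node{id: "leaf"} // referenced by both directories below
	root := &node{id: "root", children: []*node{
		{id: "a", children: []*node{shared}},
		{id: "b", children: []*node{shared}},
	}}
	c := &copier{visited: map[string]bool{}}
	c.copyTree(root)
	fmt.Println(c.copies) // root, a, b, leaf: each copied exactly once
}
```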
@@ -84,12 +84,12 @@ type Blob struct {

 func printPacks(repo *repository.Repository, wr io.Writer) error {

-	return repo.List(context.TODO(), restic.DataFile, func(id restic.ID, size int64) error {
-		h := restic.Handle{Type: restic.DataFile, Name: id.String()}
+	return repo.List(context.TODO(), restic.PackFile, func(id restic.ID, size int64) error {
+		h := restic.Handle{Type: restic.PackFile, Name: id.String()}

 		blobs, err := pack.List(repo.Key(), restic.ReaderAt(repo.Backend(), h), size)
 		if err != nil {
-			fmt.Fprintf(globalOptions.stderr, "error for pack %v: %v\n", id.Str(), err)
+			Warnf("error for pack %v: %v\n", id.Str(), err)
 			return nil
 		}

@@ -112,7 +112,7 @@ func printPacks(repo *repository.Repository, wr io.Writer) error {

 func dumpIndexes(repo restic.Repository, wr io.Writer) error {
 	return repo.List(context.TODO(), restic.IndexFile, func(id restic.ID, size int64) error {
-		fmt.Printf("index_id: %v\n", id)
+		Printf("index_id: %v\n", id)

 		idx, err := repository.LoadIndex(context.TODO(), repo, id)
 		if err != nil {
@@ -151,13 +151,13 @@ func runDebugDump(gopts GlobalOptions, args []string) error {
 	case "packs":
 		return printPacks(repo, gopts.stdout)
 	case "all":
-		fmt.Printf("snapshots:\n")
+		Printf("snapshots:\n")
 		err := debugPrintSnapshots(repo, gopts.stdout)
 		if err != nil {
 			return err
 		}

-		fmt.Printf("\nindexes:\n")
+		Printf("\nindexes:\n")
 		err = dumpIndexes(repo, gopts.stdout)
 		if err != nil {
 			return err
@@ -14,7 +14,7 @@ import (
 )

 var cmdDiff = &cobra.Command{
-	Use:   "diff snapshot-ID snapshot-ID",
+	Use:   "diff [flags] snapshot-ID snapshot-ID",
 	Short: "Show differences between two snapshots",
 	Long: `
 The "diff" command shows differences from the first to the second snapshot. The
@@ -116,10 +116,10 @@ func addBlobs(bs restic.BlobSet, node *restic.Node) {

 // DiffStats collects the differences between two snapshots.
 type DiffStats struct {
 	ChangedFiles int
 	Added        DiffStat
 	Removed      DiffStat
-	BlobsBefore, BlobsAfter restic.BlobSet
+	BlobsBefore, BlobsAfter, BlobsCommon restic.BlobSet
 }

 // NewDiffStats creates new stats for a diff run.
@@ -127,6 +127,7 @@ func NewDiffStats() *DiffStats {
 	return &DiffStats{
 		BlobsBefore: restic.NewBlobSet(),
 		BlobsAfter:  restic.NewBlobSet(),
+		BlobsCommon: restic.NewBlobSet(),
 	}
 }

@@ -177,6 +178,27 @@ func (c *Comparer) printDir(ctx context.Context, mode string, stats *DiffStat, b
 	return nil
 }

+func (c *Comparer) collectDir(ctx context.Context, blobs restic.BlobSet, id restic.ID) error {
+	debug.Log("print tree %v", id)
+	tree, err := c.repo.LoadTree(ctx, id)
+	if err != nil {
+		return err
+	}
+
+	for _, node := range tree.Nodes {
+		addBlobs(blobs, node)
+
+		if node.Type == "dir" {
+			err := c.collectDir(ctx, blobs, *node.Subtree)
+			if err != nil {
+				Warnf("error: %v\n", err)
+			}
+		}
+	}
+
+	return nil
+}
+
 func uniqueNodeNames(tree1, tree2 *restic.Tree) (tree1Nodes, tree2Nodes map[string]*restic.Node, uniqueNames []string) {
 	names := make(map[string]struct{})
 	tree1Nodes = make(map[string]*restic.Node)
@@ -196,7 +218,7 @@ func uniqueNodeNames(tree1, tree2 *restic.Tree) (tree1Nodes, tree2Nodes map[stri
 		uniqueNames = append(uniqueNames, name)
 	}

-	sort.Sort(sort.StringSlice(uniqueNames))
+	sort.Strings(uniqueNames)
 	return tree1Nodes, tree2Nodes, uniqueNames
 }

@@ -248,7 +270,12 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStats, prefix string
 	}

 	if node1.Type == "dir" && node2.Type == "dir" {
-		err := c.diffTree(ctx, stats, name, *node1.Subtree, *node2.Subtree)
+		var err error
+		if (*node1.Subtree).Equal(*node2.Subtree) {
+			err = c.collectDir(ctx, stats.BlobsCommon, *node1.Subtree)
+		} else {
+			err = c.diffTree(ctx, stats, name, *node1.Subtree, *node2.Subtree)
+		}
 		if err != nil {
 			Warnf("error: %v\n", err)
 		}
@@ -345,8 +372,8 @@ func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
 	}

 	both := stats.BlobsBefore.Intersect(stats.BlobsAfter)
-	updateBlobs(repo, stats.BlobsBefore.Sub(both), &stats.Removed)
-	updateBlobs(repo, stats.BlobsAfter.Sub(both), &stats.Added)
+	updateBlobs(repo, stats.BlobsBefore.Sub(both).Sub(stats.BlobsCommon), &stats.Removed)
+	updateBlobs(repo, stats.BlobsAfter.Sub(both).Sub(stats.BlobsCommon), &stats.Added)

 	Printf("\n")
 	Printf("Files: %5d new, %5d removed, %5d changed\n", stats.Added.Files, stats.Removed.Files, stats.ChangedFiles)
|
Printf("Files: %5d new, %5d removed, %5d changed\n", stats.Added.Files, stats.Removed.Files, stats.ChangedFiles)
|
||||||
|
|||||||
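The diff hunks above change how added and removed sizes are computed: blobs reachable only through unchanged subtrees are collected into BlobsCommon and subtracted as well. A minimal standalone sketch of that set arithmetic, using a plain string set in place of restic.BlobSet (the set type and names here are illustrative, not restic's API):

```go
package main

import "fmt"

// set is a toy stand-in for restic.BlobSet.
type set map[string]struct{}

// intersect returns the elements present in both s and o.
func (s set) intersect(o set) set {
	r := set{}
	for k := range s {
		if _, ok := o[k]; ok {
			r[k] = struct{}{}
		}
	}
	return r
}

// sub returns the elements of s that are not in o.
func (s set) sub(o set) set {
	r := set{}
	for k := range s {
		if _, ok := o[k]; !ok {
			r[k] = struct{}{}
		}
	}
	return r
}

func main() {
	before := set{"a": {}, "b": {}, "c": {}} // blobs in the first snapshot
	after := set{"b": {}, "c": {}, "d": {}}  // blobs in the second snapshot
	common := set{"c": {}}                   // blobs under subtrees that compared equal

	both := before.intersect(after)
	removed := before.sub(both).sub(common) // {"a"}
	added := after.sub(both).sub(common)    // {"d"}
	fmt.Println(len(removed), len(added))   // prints "1 1"
}
```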
cmd/restic/cmd_dump.go

@@ -1,19 +1,16 @@
 package main
 
 import (
-	"archive/tar"
 	"context"
 	"fmt"
-	"io"
 	"os"
 	"path"
 	"path/filepath"
-	"strings"
 
 	"github.com/restic/restic/internal/debug"
+	"github.com/restic/restic/internal/dump"
 	"github.com/restic/restic/internal/errors"
 	"github.com/restic/restic/internal/restic"
-	"github.com/restic/restic/internal/walker"
 
 	"github.com/spf13/cobra"
 )
@@ -22,8 +19,10 @@ var cmdDump = &cobra.Command{
 	Use:   "dump [flags] snapshotID file",
 	Short: "Print a backed-up file to stdout",
 	Long: `
-The "dump" command extracts a single file from a snapshot from the repository and
-prints its contents to stdout.
+The "dump" command extracts files from a snapshot from the repository. If a
+single file is selected, it prints its contents to stdout. Folders are output
+as a tar file containing the contents of the specified folder. Pass "/" as
+file name to dump the whole snapshot as a tar file.
 
 The special snapshot "latest" can be used to use the latest snapshot in the
 repository.
@@ -59,17 +58,14 @@ func init() {
 
 func splitPath(p string) []string {
 	d, f := path.Split(p)
-	if d == "" {
+	if d == "" || d == "/" {
 		return []string{f}
 	}
-	if d == "/" {
-		return []string{d}
-	}
-	s := splitPath(path.Clean(d))
+	s := splitPath(path.Join("/", d))
 	return append(s, f)
 }
 
-func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string, pathToPrint string) error {
+func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string) error {
 
 	if tree == nil {
 		return fmt.Errorf("called with a nil tree")
@@ -81,24 +77,42 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
 	if l == 0 {
 		return fmt.Errorf("empty path components")
 	}
 
+	// If we print / we need to assume that there are multiple nodes at that
+	// level in the tree.
+	if pathComponents[0] == "" {
+		if err := checkStdoutTar(); err != nil {
+			return err
+		}
+		return dump.WriteTar(ctx, repo, tree, "/", os.Stdout)
+	}
+
 	item := filepath.Join(prefix, pathComponents[0])
 	for _, node := range tree.Nodes {
-		if node.Name == pathComponents[0] || pathComponents[0] == "/" {
+		// If dumping something in the highest level it will just take the
+		// first item it finds and dump that according to the switch case below.
+		if node.Name == pathComponents[0] {
 			switch {
-			case l == 1 && node.Type == "file":
-				return getNodeData(ctx, os.Stdout, repo, node)
-			case l > 1 && node.Type == "dir":
+			case l == 1 && dump.IsFile(node):
+				return dump.GetNodeData(ctx, os.Stdout, repo, node)
+			case l > 1 && dump.IsDir(node):
 				subtree, err := repo.LoadTree(ctx, *node.Subtree)
 				if err != nil {
 					return errors.Wrapf(err, "cannot load subtree for %q", item)
 				}
-				return printFromTree(ctx, subtree, repo, item, pathComponents[1:], pathToPrint)
-			case node.Type == "dir":
-				node.Path = pathToPrint
-				return tarTree(ctx, repo, node, pathToPrint)
+				return printFromTree(ctx, subtree, repo, item, pathComponents[1:])
+			case dump.IsDir(node):
+				if err := checkStdoutTar(); err != nil {
+					return err
+				}
+				subtree, err := repo.LoadTree(ctx, *node.Subtree)
+				if err != nil {
+					return err
+				}
+				return dump.WriteTar(ctx, repo, subtree, item, os.Stdout)
 			case l > 1:
 				return fmt.Errorf("%q should be a dir, but is a %q", item, node.Type)
-			case node.Type != "file":
+			case !dump.IsFile(node):
 				return fmt.Errorf("%q should be a file, but is a %q", item, node.Type)
 			}
 		}
@@ -162,7 +176,7 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
 		Exitf(2, "loading tree for snapshot %q failed: %v", snapshotIDString, err)
 	}
 
-	err = printFromTree(ctx, tree, repo, "", splittedPath, pathToPrint)
+	err = printFromTree(ctx, tree, repo, "/", splittedPath)
 	if err != nil {
 		Exitf(2, "cannot dump file: %v", err)
 	}
@@ -170,126 +184,9 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
 	return nil
 }
 
-func getNodeData(ctx context.Context, output io.Writer, repo restic.Repository, node *restic.Node) error {
-	var (
-		buf []byte
-		err error
-	)
-	for _, id := range node.Content {
-		buf, err = repo.LoadBlob(ctx, restic.DataBlob, id, buf)
-		if err != nil {
-			return err
-		}
-
-		_, err = output.Write(buf)
-		if err != nil {
-			return errors.Wrap(err, "Write")
-		}
-
-	}
-	return nil
-}
-
-func tarTree(ctx context.Context, repo restic.Repository, rootNode *restic.Node, rootPath string) error {
-
+func checkStdoutTar() error {
 	if stdoutIsTerminal() {
 		return fmt.Errorf("stdout is the terminal, please redirect output")
 	}
+	return nil
-
-	tw := tar.NewWriter(os.Stdout)
-	defer tw.Close()
-
-	// If we want to dump "/" we'll need to add the name of the first node, too
-	// as it would get lost otherwise.
-	if rootNode.Path == "/" {
-		rootNode.Path = path.Join(rootNode.Path, rootNode.Name)
-		rootPath = rootNode.Path
-	}
-
-	// we know that rootNode is a folder and walker.Walk will already process
-	// the next node, so we have to tar this one first, too
-	if err := tarNode(ctx, tw, rootNode, repo); err != nil {
-		return err
-	}
-
-	err := walker.Walk(ctx, repo, *rootNode.Subtree, nil, func(_ restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
-		if err != nil {
-			return false, err
-		}
-		if node == nil {
-			return false, nil
-		}
-
-		node.Path = path.Join(rootPath, nodepath)
-
-		if node.Type == "file" || node.Type == "symlink" || node.Type == "dir" {
-			err := tarNode(ctx, tw, node, repo)
-			if err != err {
-				return false, err
-			}
-		}
-
-		return false, nil
-	})
-
-	return err
-}
-
-func tarNode(ctx context.Context, tw *tar.Writer, node *restic.Node, repo restic.Repository) error {
-
-	header := &tar.Header{
-		Name:       node.Path,
-		Size:       int64(node.Size),
-		Mode:       int64(node.Mode),
-		Uid:        int(node.UID),
-		Gid:        int(node.GID),
-		ModTime:    node.ModTime,
-		AccessTime: node.AccessTime,
-		ChangeTime: node.ChangeTime,
-		PAXRecords: parseXattrs(node.ExtendedAttributes),
-	}
-
-	if node.Type == "symlink" {
-		header.Typeflag = tar.TypeSymlink
-		header.Linkname = node.LinkTarget
-	}
-
-	if node.Type == "dir" {
-		header.Typeflag = tar.TypeDir
-	}
-
-	err := tw.WriteHeader(header)
-	if err != nil {
-		return errors.Wrap(err, "TarHeader ")
-	}
-
-	return getNodeData(ctx, tw, repo, node)
-}
-
-func parseXattrs(xattrs []restic.ExtendedAttribute) map[string]string {
-	tmpMap := make(map[string]string)
-
-	for _, attr := range xattrs {
-		attrString := string(attr.Value)
-
-		if strings.HasPrefix(attr.Name, "system.posix_acl_") {
-			na := acl{}
-			na.decode(attr.Value)
-
-			if na.String() != "" {
-				if strings.Contains(attr.Name, "system.posix_acl_access") {
-					tmpMap["SCHILY.acl.access"] = na.String()
-				} else if strings.Contains(attr.Name, "system.posix_acl_default") {
-					tmpMap["SCHILY.acl.default"] = na.String()
-				}
-			}
-
-		} else {
-			tmpMap["SCHILY.xattr."+attr.Name] = attrString
-		}
-	}
-
-	return tmpMap
 }
cmd/restic/cmd_dump_test.go (new file, 27 lines)

@@ -0,0 +1,27 @@
+package main
+
+import (
+	"testing"
+
+	rtest "github.com/restic/restic/internal/test"
+)
+
+func TestDumpSplitPath(t *testing.T) {
+	testPaths := []struct {
+		path   string
+		result []string
+	}{
+		{"", []string{""}},
+		{"test", []string{"test"}},
+		{"test/dir", []string{"test", "dir"}},
+		{"test/dir/sub", []string{"test", "dir", "sub"}},
+		{"/", []string{""}},
+		{"/test", []string{"test"}},
+		{"/test/dir", []string{"test", "dir"}},
+		{"/test/dir/sub", []string{"test", "dir", "sub"}},
+	}
+	for _, path := range testPaths {
+		parts := splitPath(path.path)
+		rtest.Equals(t, path.result, parts)
+	}
+}
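The rewritten splitPath above can be exercised on its own. This sketch pairs the function body from the hunk with a few cases from the new test file; note how both "" and "/" now yield a single component, the empty string standing for the snapshot root:

```go
package main

import (
	"fmt"
	"path"
)

// splitPath splits p into its components, mirroring the version in the diff
// above: a lone "" component stands for the root of the snapshot.
func splitPath(p string) []string {
	d, f := path.Split(p)
	if d == "" || d == "/" {
		return []string{f}
	}
	s := splitPath(path.Join("/", d))
	return append(s, f)
}

func main() {
	for _, p := range []string{"", "/", "/test/dir", "test/dir/sub"} {
		fmt.Printf("%q -> %q\n", p, splitPath(p))
	}
}
```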
cmd/restic/cmd_find.go

@@ -270,7 +270,7 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
 
 			Printf("Unable to load tree %s\n ... which belongs to snapshot %s.\n", parentTreeID, sn.ID())
 
-			return false, walker.SkipNode
+			return false, walker.ErrSkipNode
 		}
 
 		if node == nil {
@@ -314,7 +314,7 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
 
 			if !childMayMatch {
 				ignoreIfNoMatch = true
-				errIfNoMatch = walker.SkipNode
+				errIfNoMatch = walker.ErrSkipNode
 			} else {
 				ignoreIfNoMatch = false
 			}
@@ -354,7 +354,7 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
 
 			Printf("Unable to load tree %s\n ... which belongs to snapshot %s.\n", parentTreeID, sn.ID())
 
-			return false, walker.SkipNode
+			return false, walker.ErrSkipNode
 		}
 
 		if node == nil {
@@ -417,7 +417,7 @@ func (f *Finder) packsToBlobs(ctx context.Context, packs []string) error {
 	packsFound := 0
 
 	debug.Log("Looking for packs...")
-	err := f.repo.List(ctx, restic.DataFile, func(id restic.ID, size int64) error {
+	err := f.repo.List(ctx, restic.PackFile, func(id restic.ID, size int64) error {
 		if allPacksFound {
 			return nil
 		}
@@ -465,8 +465,8 @@ func (f *Finder) findObjectPack(ctx context.Context, id string, t restic.BlobTyp
 		return
 	}
 
-	blobs, found := idx.Lookup(rid, t)
-	if !found {
+	blobs := idx.Lookup(rid, t)
+	if len(blobs) == 0 {
 		Printf("Object %s not found in the index\n", rid.Str())
 		return
 	}
cmd/restic/cmd_forget.go

@@ -94,34 +94,22 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
 		return err
 	}
 
-	removeSnapshots := 0
-
 	ctx, cancel := context.WithCancel(gopts.ctx)
 	defer cancel()
 
 	var snapshots restic.Snapshots
+	removeSnIDs := restic.NewIDSet()
 
 	for sn := range FindFilteredSnapshots(ctx, repo, opts.Hosts, opts.Tags, opts.Paths, args) {
 		snapshots = append(snapshots, sn)
 	}
 
+	var jsonGroups []*ForgetGroup
+
 	if len(args) > 0 {
 		// When explicit snapshots args are given, remove them immediately.
 		for _, sn := range snapshots {
-			if !opts.DryRun {
-				h := restic.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
-				if err = repo.Backend().Remove(gopts.ctx, h); err != nil {
-					return err
-				}
-				if !gopts.JSON {
-					Verbosef("removed snapshot %v\n", sn.ID().Str())
-				}
-				removeSnapshots++
-			} else {
-				if !gopts.JSON {
-					Verbosef("would have removed snapshot %v\n", sn.ID().Str())
-				}
-			}
+			removeSnIDs.Insert(*sn.ID())
 		}
 	} else {
 		snapshotGroups, _, err := restic.GroupSnapshots(snapshots, opts.GroupBy)
@@ -151,8 +139,6 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
 			Verbosef("Applying Policy: %v\n", policy)
 		}
 
-		var jsonGroups []*ForgetGroup
-
 		for k, snapshotGroup := range snapshotGroups {
 			if gopts.Verbose >= 1 && !gopts.JSON {
 				err = PrintSnapshotGroupHeader(gopts.stdout, k)
@@ -191,37 +177,37 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
 
 			jsonGroups = append(jsonGroups, &fg)
 
-			removeSnapshots += len(remove)
-
-			if !opts.DryRun {
-				for _, sn := range remove {
-					h := restic.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
-					err = repo.Backend().Remove(gopts.ctx, h)
-					if err != nil {
-						return err
-					}
-				}
-			}
-
-			if gopts.JSON {
-				err = printJSONForget(gopts.stdout, jsonGroups)
-				if err != nil {
-					return err
-				}
+			for _, sn := range remove {
+				removeSnIDs.Insert(*sn.ID())
 			}
 		}
 	}
 
-	if removeSnapshots > 0 && opts.Prune {
-		if !gopts.JSON {
-			Verbosef("%d snapshots have been removed, running prune\n", removeSnapshots)
-		}
+	if len(removeSnIDs) > 0 {
 		if !opts.DryRun {
-			return pruneRepository(gopts, repo)
+			err := DeleteFilesChecked(gopts, repo, removeSnIDs, restic.SnapshotFile)
+			if err != nil {
+				return err
+			}
+		} else {
+			if !gopts.JSON {
+				Printf("Would have removed the following snapshots:\n%v\n\n", removeSnIDs)
+			}
 		}
 	}
 
+	if gopts.JSON && len(jsonGroups) > 0 {
+		err = printJSONForget(gopts.stdout, jsonGroups)
+		if err != nil {
+			return err
+		}
+	}
+
+	if len(removeSnIDs) > 0 && opts.Prune && !opts.DryRun {
+		return pruneRepository(gopts, repo)
+	}
+
 	return nil
 }
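The forget rewrite above replaces scattered per-snapshot Backend().Remove calls with a single collect-then-delete pass over an ID set, which also deduplicates snapshots selected more than once. A small sketch of that pattern (the idSet type and the deletion stand-in are illustrative, not restic's API):

```go
package main

import "fmt"

// idSet is a toy stand-in for restic.IDSet.
type idSet map[string]struct{}

// forgetSketch collects the IDs to remove into a set first, then either
// deletes them in one pass or, on a dry run, does nothing.
func forgetSketch(selected []string, dryRun bool) (deleted int) {
	removeIDs := idSet{}
	for _, id := range selected {
		removeIDs[id] = struct{}{} // duplicates collapse here
	}
	if len(removeIDs) > 0 && !dryRun {
		for range removeIDs {
			deleted++ // stand-in for a checked batch delete
		}
	}
	return deleted
}

func main() {
	// "a" is selected twice but removed only once.
	fmt.Println(forgetSketch([]string{"a", "b", "a"}, false)) // prints "2"
	fmt.Println(forgetSketch([]string{"a", "b"}, true))       // prints "0"
}
```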
cmd/restic/cmd_generate.go

@@ -9,7 +9,7 @@ import (
 )
 
 var cmdGenerate = &cobra.Command{
-	Use:   "generate [command]",
+	Use:   "generate [flags]",
 	Short: "Generate manual pages and auto-completion files (bash, zsh)",
 	Long: `
 The "generate" command writes automatically generated files (like the man pages
cmd/restic/cmd_init.go

@@ -1,6 +1,7 @@
 package main
 
 import (
+	"github.com/restic/chunker"
 	"github.com/restic/restic/internal/errors"
 	"github.com/restic/restic/internal/repository"
 
@@ -20,19 +21,36 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
 `,
 	DisableAutoGenTag: true,
 	RunE: func(cmd *cobra.Command, args []string) error {
-		return runInit(globalOptions, args)
+		return runInit(initOptions, globalOptions, args)
 	},
 }
 
-func init() {
-	cmdRoot.AddCommand(cmdInit)
+// InitOptions bundles all options for the init command.
+type InitOptions struct {
+	secondaryRepoOptions
+	CopyChunkerParameters bool
 }
 
-func runInit(gopts GlobalOptions, args []string) error {
+var initOptions InitOptions
+
+func init() {
+	cmdRoot.AddCommand(cmdInit)
+
+	f := cmdInit.Flags()
+	initSecondaryRepoOptions(f, &initOptions.secondaryRepoOptions, "secondary", "to copy chunker parameters from")
+	f.BoolVar(&initOptions.CopyChunkerParameters, "copy-chunker-params", false, "copy chunker parameters from the secondary repository (useful with the copy command)")
+}
+
+func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
 	if gopts.Repo == "" {
 		return errors.Fatal("Please specify repository location (-r)")
 	}
 
+	chunkerPolynomial, err := maybeReadChunkerPolynomial(opts, gopts)
+	if err != nil {
+		return err
+	}
+
 	be, err := create(gopts.Repo, gopts.extended)
 	if err != nil {
 		return errors.Fatalf("create repository at %s failed: %v\n", gopts.Repo, err)
@@ -47,7 +65,7 @@ func runInit(gopts GlobalOptions, args []string) error {
 
 	s := repository.New(be)
 
-	err = s.Init(gopts.ctx, gopts.password)
+	err = s.Init(gopts.ctx, gopts.password, chunkerPolynomial)
 	if err != nil {
 		return errors.Fatalf("create key in repository at %s failed: %v\n", gopts.Repo, err)
 	}
@@ -60,3 +78,25 @@ func runInit(gopts GlobalOptions, args []string) error {
 
 	return nil
 }
+
+func maybeReadChunkerPolynomial(opts InitOptions, gopts GlobalOptions) (*chunker.Pol, error) {
+	if opts.CopyChunkerParameters {
+		otherGopts, err := fillSecondaryGlobalOpts(opts.secondaryRepoOptions, gopts, "secondary")
+		if err != nil {
+			return nil, err
+		}
+
+		otherRepo, err := OpenRepository(otherGopts)
+		if err != nil {
+			return nil, err
+		}
+
+		pol := otherRepo.Config().ChunkerPolynomial
+		return &pol, nil
+	}
+
+	if opts.Repo != "" {
+		return nil, errors.Fatal("Secondary repository must only be specified when copying the chunker parameters")
+	}
+	return nil, nil
+}
cmd/restic/cmd_key.go

@@ -16,7 +16,7 @@ import (
 )
 
 var cmdKey = &cobra.Command{
-	Use:   "key [list|add|remove|passwd] [ID]",
+	Use:   "key [flags] [list|add|remove|passwd] [ID]",
 	Short: "Manage keys (passwords)",
 	Long: `
 The "key" command manages keys (passwords) for accessing the repository.
@@ -32,13 +32,19 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
 	},
 }
 
-var newPasswordFile string
+var (
+	newPasswordFile string
+	keyUsername     string
+	keyHostname     string
+)
 
 func init() {
 	cmdRoot.AddCommand(cmdKey)
 
 	flags := cmdKey.Flags()
-	flags.StringVarP(&newPasswordFile, "new-password-file", "", "", "the file from which to load a new password")
+	flags.StringVarP(&newPasswordFile, "new-password-file", "", "", "`file` from which to read the new password")
+	flags.StringVarP(&keyUsername, "user", "", "", "the username for new keys")
+	flags.StringVarP(&keyHostname, "host", "", "", "the hostname for new keys")
 }
 
 func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions) error {
@@ -110,7 +116,7 @@ func getNewPassword(gopts GlobalOptions) (string, error) {
 	newopts.password = ""
 
 	return ReadPasswordTwice(newopts,
-		"enter password for new key: ",
+		"enter new password: ",
 		"enter password again: ")
 }
 
@@ -120,7 +126,7 @@ func addKey(gopts GlobalOptions, repo *repository.Repository) error {
 		return err
 	}
 
-	id, err := repository.AddKey(gopts.ctx, repo, pw, repo.Key())
+	id, err := repository.AddKey(gopts.ctx, repo, pw, keyUsername, keyHostname, repo.Key())
 	if err != nil {
 		return errors.Fatalf("creating new key failed: %v\n", err)
 	}
@@ -151,7 +157,7 @@ func changePassword(gopts GlobalOptions, repo *repository.Repository) error {
 		return err
 	}
 
-	id, err := repository.AddKey(gopts.ctx, repo, pw, "", "", repo.Key())
+	id, err := repository.AddKey(gopts.ctx, repo, pw, "", "", repo.Key())
 	if err != nil {
 		return errors.Fatalf("creating new key failed: %v\n", err)
 	}
cmd/restic/cmd_list.go

@@ -1,17 +1,15 @@
 package main
 
 import (
-	"fmt"
-
 	"github.com/restic/restic/internal/errors"
-	"github.com/restic/restic/internal/index"
+	"github.com/restic/restic/internal/repository"
 	"github.com/restic/restic/internal/restic"
 
 	"github.com/spf13/cobra"
 )
 
 var cmdList = &cobra.Command{
-	Use:   "list [blobs|packs|index|snapshots|keys|locks]",
+	Use:   "list [flags] [blobs|packs|index|snapshots|keys|locks]",
 	Short: "List objects in the repository",
 	Long: `
 The "list" command allows listing objects in the repository based on type.
@@ -52,7 +50,7 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
 	var t restic.FileType
 	switch args[0] {
 	case "packs":
-		t = restic.DataFile
+		t = restic.PackFile
 	case "index":
 		t = restic.IndexFile
 	case "snapshots":
@@ -62,18 +60,17 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
 	case "locks":
 		t = restic.LockFile
 	case "blobs":
-		idx, err := index.Load(opts.ctx, repo, nil)
-		if err != nil {
-			return err
-		}
-
-		for _, pack := range idx.Packs {
-			for _, entry := range pack.Entries {
-				fmt.Printf("%v %v\n", entry.Type, entry.ID)
-			}
-		}
-
-		return nil
+		return repo.List(opts.ctx, restic.IndexFile, func(id restic.ID, size int64) error {
+			idx, err := repository.LoadIndex(opts.ctx, repo, id)
+			if err != nil {
+				return err
+			}
+			for blobs := range idx.Each(opts.ctx) {
+				Printf("%v %v\n", blobs.Type, blobs.ID)
+			}
+			return nil
+		})
+
 	default:
 		return errors.Fatal("invalid type")
 	}
cmd/restic/cmd_ls.go

@@ -16,7 +16,7 @@ import (
 )
 
 var cmdLs = &cobra.Command{
-	Use:   "ls [flags] [snapshotID] [dir...]",
+	Use:   "ls [flags] snapshotID [dir...]",
 	Short: "List files in a snapshot",
 	Long: `
 The "ls" command lists files and directories in a snapshot.
@@ -89,8 +89,8 @@ type lsNode struct {
 }
 
 func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
-	if len(args) == 0 && len(opts.Hosts) == 0 && len(opts.Tags) == 0 && len(opts.Paths) == 0 {
-		return errors.Fatal("Invalid arguments, either give one or more snapshot IDs or set filters.")
+	if len(args) == 0 {
+		return errors.Fatal("no snapshot ID specified")
 	}
 
 	// extract any specific directories to walk
@@ -222,7 +222,7 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
 		// otherwise, signal the walker to not walk recursively into any
 		// subdirs
 		if node.Type == "dir" {
-			return false, walker.SkipNode
+			return false, walker.ErrSkipNode
 		}
 		return false, nil
 	})
cmd/restic/cmd_migrate.go

@@ -8,7 +8,7 @@ import (
 )
 
 var cmdMigrate = &cobra.Command{
-	Use:   "migrate [name]",
+	Use:   "migrate [flags] [name]",
 	Short: "Apply migrations",
 	Long: `
 The "migrate" command applies migrations to a repository. When no migration
cmd/restic/cmd_mount.go

@@ -1,7 +1,4 @@
-// +build !netbsd
-// +build !openbsd
-// +build !solaris
-// +build !windows
+// +build darwin freebsd linux
 
 package main
 
@@ -93,10 +90,12 @@ func mount(opts MountOptions, gopts GlobalOptions, mountpoint string) error {
 		return err
 	}
 
-	lock, err := lockRepo(repo)
-	defer unlockRepo(lock)
-	if err != nil {
-		return err
+	if !gopts.NoLock {
+		lock, err := lockRepo(repo)
+		defer unlockRepo(lock)
+		if err != nil {
+			return err
+		}
 	}
 
 	err = repo.LoadIndex(gopts.ctx)
@@ -142,10 +141,7 @@ func mount(opts MountOptions, gopts GlobalOptions, mountpoint string) error {
 		Paths:            opts.Paths,
 		SnapshotTemplate: opts.SnapshotTemplate,
 	}
-	root, err := fuse.NewRoot(gopts.ctx, repo, cfg)
-	if err != nil {
-		return err
-	}
+	root := fuse.NewRoot(repo, cfg)
 
 	Printf("Now serving the repository at %s\n", mountpoint)
 	Printf("When finished, quit with Ctrl-c or umount the mountpoint.\n")
@@ -183,7 +179,7 @@ func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
 	debug.Log("running umount cleanup handler for mount at %v", mountpoint)
 	err := umount(mountpoint)
 	if err != nil {
-		Warnf("unable to umount (maybe already umounted?): %v\n", err)
+		Warnf("unable to umount (maybe already umounted or still in use?): %v\n", err)
 	}
 	return nil
 })
@@ -1,9 +1,6 @@
 package main
 
 import (
-	"fmt"
-	"time"
-
 	"github.com/restic/restic/internal/debug"
 	"github.com/restic/restic/internal/errors"
 	"github.com/restic/restic/internal/index"
@@ -47,34 +44,6 @@ func shortenStatus(maxLength int, s string) string {
 	return s[:maxLength-3] + "..."
 }
 
-// newProgressMax returns a progress that counts blobs.
-func newProgressMax(show bool, max uint64, description string) *restic.Progress {
-	if !show {
-		return nil
-	}
-
-	p := restic.NewProgress()
-
-	p.OnUpdate = func(s restic.Stat, d time.Duration, ticker bool) {
-		status := fmt.Sprintf("[%s] %s %d / %d %s",
-			formatDuration(d),
-			formatPercent(s.Blobs, max),
-			s.Blobs, max, description)
-
-		if w := stdoutTerminalWidth(); w > 0 {
-			status = shortenStatus(w, status)
-		}
-
-		PrintProgress("%s", status)
-	}
-
-	p.OnDone = func(s restic.Stat, d time.Duration, ticker bool) {
-		fmt.Printf("\n")
-	}
-
-	return p
-}
-
 func runPrune(gopts GlobalOptions) error {
 	repo, err := OpenRepository(gopts)
 	if err != nil {
@@ -87,6 +56,9 @@ func runPrune(gopts GlobalOptions) error {
 		return err
 	}
 
+	// we do not need index updates while pruning!
+	repo.DisableAutoIndexUpdate()
+
 	return pruneRepository(gopts, repo)
 }
 
@@ -125,7 +97,7 @@ func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
 	}
 
 	Verbosef("counting files in repo\n")
-	err = repo.List(ctx, restic.DataFile, func(restic.ID, int64) error {
+	err = repo.List(ctx, restic.PackFile, func(restic.ID, int64) error {
 		stats.packs++
 		return nil
 	})
@@ -183,34 +155,22 @@ func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
 
 	stats.snapshots = len(snapshots)
 
-	Verbosef("find data that is still in use for %d snapshots\n", stats.snapshots)
-
-	usedBlobs := restic.NewBlobSet()
-	seenBlobs := restic.NewBlobSet()
-
-	bar = newProgressMax(!gopts.Quiet, uint64(len(snapshots)), "snapshots")
-	bar.Start()
-	for _, sn := range snapshots {
-		debug.Log("process snapshot %v", sn.ID())
-
-		err = restic.FindUsedBlobs(ctx, repo, *sn.Tree, usedBlobs, seenBlobs)
-		if err != nil {
-			if repo.Backend().IsNotExist(err) {
-				return errors.Fatal("unable to load a tree from the repo: " + err.Error())
-			}
-
-			return err
-		}
-
-		debug.Log("processed snapshot %v", sn.ID())
-		bar.Report(restic.Stat{Blobs: 1})
+	usedBlobs, err := getUsedBlobs(gopts, repo, snapshots)
+	if err != nil {
+		return err
 	}
-	bar.Done()
 
-	if len(usedBlobs) > stats.blobs {
-		return errors.Fatalf("number of used blobs is larger than number of available blobs!\n" +
-			"Please report this error (along with the output of the 'prune' run) at\n" +
-			"https://github.com/restic/restic/issues/new")
+	var missingBlobs []restic.BlobHandle
+	for h := range usedBlobs {
+		if _, ok := blobCount[h]; !ok {
+			missingBlobs = append(missingBlobs, h)
+		}
+	}
+	if len(missingBlobs) > 0 {
+		return errors.Fatalf("%v not found in the new index\n"+
+			"Data blobs seem to be missing, aborting prune to prevent further data loss!\n"+
+			"Please report this error (along with the output of the 'prune' run) at\n"+
+			"https://github.com/restic/restic/issues/new/choose", missingBlobs)
 	}
 
 	Verbosef("found %d of %d data blobs still in use, removing %d blobs\n",
@@ -278,13 +238,11 @@ func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
 
 	var obsoletePacks restic.IDSet
 	if len(rewritePacks) != 0 {
-		bar = newProgressMax(!gopts.Quiet, uint64(len(rewritePacks)), "packs rewritten")
-		bar.Start()
+		bar := newProgressMax(!gopts.Quiet, uint64(len(rewritePacks)), "packs rewritten")
		obsoletePacks, err = repository.Repack(ctx, repo, rewritePacks, usedBlobs, bar)
 		if err != nil {
 			return err
 		}
-		bar.Done()
 	}
 
 	removePacks.Merge(obsoletePacks)
@@ -294,19 +252,38 @@ func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
 	}
 
 	if len(removePacks) != 0 {
-		bar = newProgressMax(!gopts.Quiet, uint64(len(removePacks)), "packs deleted")
-		bar.Start()
-		for packID := range removePacks {
-			h := restic.Handle{Type: restic.DataFile, Name: packID.String()}
-			err = repo.Backend().Remove(ctx, h)
-			if err != nil {
-				Warnf("unable to remove file %v from the repository\n", packID.Str())
-			}
-			bar.Report(restic.Stat{Blobs: 1})
-		}
-		bar.Done()
+		Verbosef("remove %d old packs\n", len(removePacks))
+		DeleteFiles(gopts, repo, removePacks, restic.PackFile)
 	}
 
 	Verbosef("done\n")
 	return nil
 }
 
+func getUsedBlobs(gopts GlobalOptions, repo restic.Repository, snapshots []*restic.Snapshot) (usedBlobs restic.BlobSet, err error) {
+	ctx := gopts.ctx
+
+	Verbosef("find data that is still in use for %d snapshots\n", len(snapshots))
+
+	usedBlobs = restic.NewBlobSet()
+
+	bar := newProgressMax(!gopts.Quiet, uint64(len(snapshots)), "snapshots")
+	bar.Start()
+	defer bar.Done()
+	for _, sn := range snapshots {
+		debug.Log("process snapshot %v", sn.ID())
+
+		err = restic.FindUsedBlobs(ctx, repo, *sn.Tree, usedBlobs)
+		if err != nil {
+			if repo.Backend().IsNotExist(err) {
+				return nil, errors.Fatal("unable to load a tree from the repo: " + err.Error())
+			}
+
+			return nil, err
+		}
+
+		debug.Log("processed snapshot %v", sn.ID())
+		bar.Report(restic.Stat{Blobs: 1})
	}
+	return usedBlobs, nil
+}
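Pulled out of the prune changes above, the new missing-blob check is just a set difference between the used-blob set and the blob counts from the freshly built index. A minimal standalone sketch of that check; `BlobHandle` here is a simplified stand-in for restic's own type, and `findMissing` is a hypothetical helper name:

```go
package main

import "fmt"

// BlobHandle is a simplified stand-in for restic.BlobHandle.
type BlobHandle struct{ ID string }

// findMissing returns every handle in usedBlobs that does not appear in
// blobCount, mirroring the check prune performs before deleting packs.
func findMissing(usedBlobs map[BlobHandle]struct{}, blobCount map[BlobHandle]int) []BlobHandle {
	var missing []BlobHandle
	for h := range usedBlobs {
		if _, ok := blobCount[h]; !ok {
			missing = append(missing, h)
		}
	}
	return missing
}

func main() {
	used := map[BlobHandle]struct{}{{ID: "a"}: {}, {ID: "b"}: {}}
	count := map[BlobHandle]int{{ID: "a"}: 1}
	fmt.Println(len(findMissing(used, count)))
}
```

The point of the new code is exactly this: any non-empty result aborts the prune before packs are removed, instead of relying on the old blob-count comparison.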
@@ -53,7 +53,7 @@ func rebuildIndex(ctx context.Context, repo restic.Repository, ignorePacks resti
 	Verbosef("counting files in repo\n")
 
 	var packs uint64
-	err := repo.List(ctx, restic.DataFile, func(restic.ID, int64) error {
+	err := repo.List(ctx, restic.PackFile, func(restic.ID, int64) error {
 		packs++
 		return nil
 	})
@@ -92,14 +92,9 @@ func rebuildIndex(ctx context.Context, repo restic.Repository, ignorePacks resti
 	Verbosef("saved new indexes as %v\n", ids)
 
 	Verbosef("remove %d old index files\n", len(supersedes))
-	for _, id := range supersedes {
-		if err := repo.Backend().Remove(ctx, restic.Handle{
-			Type: restic.IndexFile,
-			Name: id.String(),
-		}); err != nil {
-			Warnf("error removing old index %v: %v\n", id.Str(), err)
-		}
+	err = DeleteFilesChecked(globalOptions, repo, restic.NewIDSet(supersedes...), restic.IndexFile)
+	if err != nil {
+		return errors.Fatalf("unable to remove an old index: %v\n", err)
 	}
 
 	return nil
|||||||
@@ -130,11 +130,6 @@ func runRecover(gopts GlobalOptions) error {
|
|||||||
return errors.Fatalf("unable to save blobs to the repo: %v", err)
|
return errors.Fatalf("unable to save blobs to the repo: %v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
err = repo.SaveIndex(gopts.ctx)
|
|
||||||
if err != nil {
|
|
||||||
return errors.Fatalf("unable to save new index to the repo: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
sn, err := restic.NewSnapshot([]string{"/recover"}, []string{}, hostname, time.Now())
|
sn, err := restic.NewSnapshot([]string{"/recover"}, []string{}, hostname, time.Now())
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return errors.Fatalf("unable to save snapshot: %v", err)
|
return errors.Fatalf("unable to save snapshot: %v", err)
|
||||||
|
|||||||
@@ -14,7 +14,7 @@ import (
|
|||||||
)
|
)
|
||||||
|
|
||||||
var cmdSnapshots = &cobra.Command{
|
var cmdSnapshots = &cobra.Command{
|
||||||
Use: "snapshots [snapshotID ...]",
|
Use: "snapshots [flags] [snapshotID ...]",
|
||||||
Short: "List all snapshots",
|
Short: "List all snapshots",
|
||||||
Long: `
|
Long: `
|
||||||
The "snapshots" command lists all snapshots stored in the repository.
|
The "snapshots" command lists all snapshots stored in the repository.
|
||||||
@@ -251,9 +251,8 @@ func PrintSnapshots(stdout io.Writer, list restic.Snapshots, reasons []restic.Ke
|
|||||||
// Prints nothing, if we did not group at all.
|
// Prints nothing, if we did not group at all.
|
||||||
func PrintSnapshotGroupHeader(stdout io.Writer, groupKeyJSON string) error {
|
func PrintSnapshotGroupHeader(stdout io.Writer, groupKeyJSON string) error {
|
||||||
var key restic.SnapshotGroupKey
|
var key restic.SnapshotGroupKey
|
||||||
var err error
|
|
||||||
|
|
||||||
err = json.Unmarshal([]byte(groupKeyJSON), &key)
|
err := json.Unmarshal([]byte(groupKeyJSON), &key)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -2,31 +2,31 @@ package main
 
 import (
 	"context"
-	"crypto/sha256"
 	"encoding/json"
 	"fmt"
-	"os"
 	"path/filepath"
 
-	"github.com/restic/restic/internal/errors"
 	"github.com/restic/restic/internal/restic"
 	"github.com/restic/restic/internal/walker"
 
+	"github.com/minio/sha256-simd"
 	"github.com/spf13/cobra"
 )
 
 var cmdStats = &cobra.Command{
-	Use: "stats [flags] [snapshot-ID]",
+	Use: "stats [flags] [snapshot ID] [...]",
 	Short: "Scan the repository and show basic statistics",
 	Long: `
-The "stats" command walks one or all snapshots in a repository and
-accumulates statistics about the data stored therein. It reports on
-the number of unique files and their sizes, according to one of
+The "stats" command walks one or multiple snapshots in a repository
+and accumulates statistics about the data stored therein. It reports
+on the number of unique files and their sizes, according to one of
 the counting modes as given by the --mode flag.
 
-If no snapshot is specified, all snapshots will be considered. Some
-modes make more sense over just a single snapshot, while others
-are useful across all snapshots, depending on what you are trying
-to calculate.
+It operates on all snapshots matching the selection criteria or all
+snapshots if nothing is specified. The special snapshot ID "latest"
+is also supported. Some modes make more sense over
+just a single snapshot, while others are useful across all snapshots,
+depending on what you are trying to calculate.
 
 The modes are:
 
@@ -50,11 +50,26 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
 	},
 }
 
+// StatsOptions collects all options for the stats command.
+type StatsOptions struct {
+	// the mode of counting to perform (see consts for available modes)
+	countMode string
+
+	// filter snapshots by, if given by user
+	Hosts []string
+	Tags  restic.TagLists
+	Paths []string
+}
+
+var statsOptions StatsOptions
+
 func init() {
 	cmdRoot.AddCommand(cmdStats)
 	f := cmdStats.Flags()
-	f.StringVar(&countMode, "mode", countModeRestoreSize, "counting mode: restore-size (default), files-by-contents, blobs-per-file, or raw-data")
-	f.StringArrayVarP(&snapshotByHosts, "host", "H", nil, "filter latest snapshot by this hostname (can be specified multiple times)")
+	f.StringVar(&statsOptions.countMode, "mode", countModeRestoreSize, "counting mode: restore-size (default), files-by-contents, blobs-per-file or raw-data")
+	f.StringArrayVarP(&statsOptions.Hosts, "host", "H", nil, "only consider snapshots with the given `host` (can be specified multiple times)")
+	f.Var(&statsOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
+	f.StringArrayVar(&statsOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` (can be specified multiple times)")
 }
 
 func runStats(gopts GlobalOptions, args []string) error {
@@ -89,53 +104,25 @@ func runStats(gopts GlobalOptions, args []string) error {
 
 	// create a container for the stats (and other needed state)
 	stats := &statsContainer{
 		uniqueFiles: make(map[fileID]struct{}),
 		uniqueInodes: make(map[uint64]struct{}),
 		fileBlobs: make(map[string]restic.IDSet),
 		blobs: restic.NewBlobSet(),
-		blobsSeen: restic.NewBlobSet(),
+		snapshotsCount: 0,
 	}
 
-	if snapshotIDString != "" {
-		// scan just a single snapshot
-
-		var sID restic.ID
-		if snapshotIDString == "latest" {
-			sID, err = restic.FindLatestSnapshot(ctx, repo, []string{}, []restic.TagList{}, snapshotByHosts)
-			if err != nil {
-				return errors.Fatalf("latest snapshot for criteria not found: %v", err)
-			}
-		} else {
-			sID, err = restic.FindSnapshot(repo, snapshotIDString)
-			if err != nil {
-				return errors.Fatalf("error loading snapshot: %v", err)
-			}
-		}
-
-		snapshot, err := restic.LoadSnapshot(ctx, repo, sID)
-		if err != nil {
-			return errors.Fatalf("error loading snapshot from repo: %v", err)
-		}
-
-		err = statsWalkSnapshot(ctx, snapshot, repo, stats)
+	for sn := range FindFilteredSnapshots(ctx, repo, statsOptions.Hosts, statsOptions.Tags, statsOptions.Paths, args) {
+		err = statsWalkSnapshot(ctx, sn, repo, stats)
 		if err != nil {
 			return fmt.Errorf("error walking snapshot: %v", err)
 		}
-	} else {
-		// iterate every snapshot in the repo
-		err = repo.List(ctx, restic.SnapshotFile, func(snapshotID restic.ID, size int64) error {
-			snapshot, err := restic.LoadSnapshot(ctx, repo, snapshotID)
-			if err != nil {
-				return fmt.Errorf("Error loading snapshot %s: %v", snapshotID.Str(), err)
-			}
-			return statsWalkSnapshot(ctx, snapshot, repo, stats)
-		})
 	}
 
 	if err != nil {
 		return err
 	}
 
-	if countMode == countModeRawData {
+	if statsOptions.countMode == countModeRawData {
 		// the blob handles have been collected, but not yet counted
 		for blobHandle := range stats.blobs {
 			blobSize, found := repo.LookupBlobSize(blobHandle.ID, blobHandle.Type)
@@ -148,29 +135,23 @@ func runStats(gopts GlobalOptions, args []string) error {
 	}
 
 	if gopts.JSON {
-		err = json.NewEncoder(os.Stdout).Encode(stats)
+		err = json.NewEncoder(globalOptions.stdout).Encode(stats)
 		if err != nil {
 			return fmt.Errorf("encoding output: %v", err)
 		}
 		return nil
 	}
 
-	// inform the user what was scanned and how it was scanned
-	snapshotsScanned := snapshotIDString
-	if snapshotsScanned == "latest" {
-		snapshotsScanned = "the latest snapshot"
-	} else if snapshotsScanned == "" {
-		snapshotsScanned = "all snapshots"
-	}
-	Printf("Stats for %s in %s mode:\n", snapshotsScanned, countMode)
+	Printf("Stats in %s mode:\n", statsOptions.countMode)
+	Printf("Snapshots processed: %d\n", stats.snapshotsCount)
 
 	if stats.TotalBlobCount > 0 {
 		Printf(" Total Blob Count: %d\n", stats.TotalBlobCount)
 	}
 	if stats.TotalFileCount > 0 {
 		Printf(" Total File Count: %d\n", stats.TotalFileCount)
 	}
 	Printf(" Total Size: %-5s\n", formatBytes(stats.TotalSize))
 
 	return nil
 }
@@ -180,16 +161,19 @@ func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo rest
 		return fmt.Errorf("snapshot %s has nil tree", snapshot.ID().Str())
 	}
 
-	if countMode == countModeRawData {
+	stats.snapshotsCount++
+
+	if statsOptions.countMode == countModeRawData {
 		// count just the sizes of unique blobs; we don't need to walk the tree
 		// ourselves in this case, since a nifty function does it for us
-		return restic.FindUsedBlobs(ctx, repo, *snapshot.Tree, stats.blobs, stats.blobsSeen)
+		return restic.FindUsedBlobs(ctx, repo, *snapshot.Tree, stats.blobs)
 	}
 
 	err := walker.Walk(ctx, repo, *snapshot.Tree, restic.NewIDSet(), statsWalkTree(repo, stats))
 	if err != nil {
 		return fmt.Errorf("walking tree %s: %v", *snapshot.Tree, err)
 	}
 
 	return nil
 }
 
@@ -202,19 +186,19 @@ func statsWalkTree(repo restic.Repository, stats *statsContainer) walker.WalkFun
 		return true, nil
 	}
 
-	if countMode == countModeUniqueFilesByContents || countMode == countModeBlobsPerFile {
+	if statsOptions.countMode == countModeUniqueFilesByContents || statsOptions.countMode == countModeBlobsPerFile {
 		// only count this file if we haven't visited it before
 		fid := makeFileIDByContents(node)
 		if _, ok := stats.uniqueFiles[fid]; !ok {
 			// mark the file as visited
 			stats.uniqueFiles[fid] = struct{}{}
 
-			if countMode == countModeUniqueFilesByContents {
+			if statsOptions.countMode == countModeUniqueFilesByContents {
 				// simply count the size of each unique file (unique by contents only)
 				stats.TotalSize += node.Size
 				stats.TotalFileCount++
 			}
-			if countMode == countModeBlobsPerFile {
+			if statsOptions.countMode == countModeBlobsPerFile {
 				// count the size of each unique blob reference, which is
 				// by unique file (unique by contents and file path)
 				for _, blobID := range node.Content {
@@ -244,7 +228,7 @@ func statsWalkTree(repo restic.Repository, stats *statsContainer) walker.WalkFun
 		}
 	}
 
-	if countMode == countModeRestoreSize {
+	if statsOptions.countMode == countModeRestoreSize {
 		// as this is a file in the snapshot, we can simply count its
 		// size without worrying about uniqueness, since duplicate files
 		// will still be restored
@@ -276,23 +260,13 @@ func makeFileIDByContents(node *restic.Node) fileID {
 
 func verifyStatsInput(gopts GlobalOptions, args []string) error {
 	// require a recognized counting mode
-	switch countMode {
+	switch statsOptions.countMode {
 	case countModeRestoreSize:
 	case countModeUniqueFilesByContents:
 	case countModeBlobsPerFile:
 	case countModeRawData:
 	default:
-		return fmt.Errorf("unknown counting mode: %s (use the -h flag to get a list of supported modes)", countMode)
-	}
-
-	// ensure at most one snapshot was specified
-	if len(args) > 1 {
-		return fmt.Errorf("only one snapshot may be specified")
-	}
-
-	// if a snapshot was specified, mark it as the one to scan
-	if len(args) == 1 {
-		snapshotIDString = args[0]
+		return fmt.Errorf("unknown counting mode: %s (use the -h flag to get a list of supported modes)", statsOptions.countMode)
 	}
 
 	return nil
@@ -318,26 +292,17 @@ type statsContainer struct {
 	// blobs that have been seen as a part of the file
 	fileBlobs map[string]restic.IDSet
 
-	// blobs and blobsSeen are used to count individual
-	// unique blobs, independent of references to files
-	blobs, blobsSeen restic.BlobSet
+	// blobs is used to count individual unique blobs,
+	// independent of references to files
+	blobs restic.BlobSet
+
+	// holds count of all considered snapshots
+	snapshotsCount int
 }
 
 // fileID is a 256-bit hash that distinguishes unique files.
 type fileID [32]byte
 
-var (
-	// the mode of counting to perform
-	countMode string
-
-	// the snapshot to scan, as given by the user
-	snapshotIDString string
-
-	// snapshotByHost is the host to filter latest
-	// snapshot by, if given by user
-	snapshotByHosts []string
-)
-
 const (
 	countModeRestoreSize = "restore-size"
 	countModeUniqueFilesByContents = "files-by-contents"
cmd/restic/delete.go (new file, 62 lines)
@@ -0,0 +1,62 @@
+package main
+
+import (
+	"golang.org/x/sync/errgroup"
+
+	"github.com/restic/restic/internal/restic"
+)
+
+// DeleteFiles deletes the given fileList of fileType in parallel
+// it will print a warning if there is an error, but continue deleting the remaining files
+func DeleteFiles(gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) {
+	deleteFiles(gopts, true, repo, fileList, fileType)
+}
+
+// DeleteFilesChecked deletes the given fileList of fileType in parallel
+// if an error occurs, it will cancel and return this error
+func DeleteFilesChecked(gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
+	return deleteFiles(gopts, false, repo, fileList, fileType)
+}
+
+const numDeleteWorkers = 8
+
+// deleteFiles deletes the given fileList of fileType in parallel
+// if ignoreError=true, it will print a warning if there was an error, else it will abort.
+func deleteFiles(gopts GlobalOptions, ignoreError bool, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) error {
+	totalCount := len(fileList)
+	fileChan := make(chan restic.ID)
+	go func() {
+		for id := range fileList {
+			fileChan <- id
+		}
+		close(fileChan)
+	}()
+
+	bar := newProgressMax(!gopts.JSON && !gopts.Quiet, uint64(totalCount), "files deleted")
+	wg, ctx := errgroup.WithContext(gopts.ctx)
+	bar.Start()
+	for i := 0; i < numDeleteWorkers; i++ {
+		wg.Go(func() error {
+			for id := range fileChan {
+				h := restic.Handle{Type: fileType, Name: id.String()}
+				err := repo.Backend().Remove(ctx, h)
+				if err != nil {
+					if !gopts.JSON {
+						Warnf("unable to remove %v from the repository\n", h)
+					}
+					if !ignoreError {
+						return err
+					}
+				}
+				if !gopts.JSON && gopts.verbosity >= 2 {
+					Verbosef("removed %v\n", h)
+				}
+				bar.Report(restic.Stat{Blobs: 1})
+			}
+			return nil
+		})
+	}
+	err := wg.Wait()
+	bar.Done()
+	return err
+}
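The new delete.go funnels IDs from a feeder goroutine through a channel to a fixed pool of workers. A minimal standalone sketch of that feeder-plus-pool shape, using `sync.WaitGroup` from the standard library in place of `errgroup`, with a caller-supplied `remove` function standing in for `repo.Backend().Remove`:

```go
package main

import (
	"fmt"
	"sync"
)

const numDeleteWorkers = 8

// deleteAll mirrors the shape of deleteFiles: one goroutine streams IDs
// into an unbuffered channel, a fixed number of workers drain it, and the
// caller waits for all workers to finish. It returns how many removals
// succeeded.
func deleteAll(ids []string, remove func(string) error) int {
	fileChan := make(chan string)
	go func() {
		for _, id := range ids {
			fileChan <- id
		}
		close(fileChan)
	}()

	var wg sync.WaitGroup
	var mu sync.Mutex
	deleted := 0
	for i := 0; i < numDeleteWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range fileChan {
				if err := remove(id); err == nil {
					mu.Lock()
					deleted++
					mu.Unlock()
				}
			}
		}()
	}
	wg.Wait()
	return deleted
}

func main() {
	n := deleteAll([]string{"a", "b", "c", "d", "e"}, func(string) error { return nil })
	fmt.Println(n)
}
```

The unbuffered channel bounds memory regardless of how many files are queued; the real code additionally uses `errgroup.WithContext` so a worker returning an error (in the checked variant) cancels the context shared by the others.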
@@ -6,6 +6,7 @@ import (
|
|||||||
"io"
|
"io"
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
|
|
||||||
@@ -131,7 +132,7 @@ func rejectIfPresent(excludeFileSpec string) (RejectByNameFunc, error) {
 }
 
 // isExcludedByFile interprets filename as a path and returns true if that file
-// is in a excluded directory. A directory is identified as excluded if it contains a
+// is in an excluded directory. A directory is identified as excluded if it contains a
 // tagfile which bears the name specified in tagFilename and starts with
 // header. If rc is non-nil, it is used to expedite the evaluation of a
 // directory based on previous visits.
@@ -190,7 +191,7 @@ func isDirExcludedByFile(dir, tagFilename, header string) bool {
 		Warnf("could not read signature from exclusion tagfile %q: %v\n", tf, err)
 		return false
 	}
-	if bytes.Compare(buf, []byte(header)) != 0 {
+	if !bytes.Equal(buf, []byte(header)) {
 		Warnf("invalid signature in exclusion tagfile %q\n", tf)
 		return false
 	}
@@ -292,3 +293,50 @@ func rejectResticCache(repo *repository.Repository) (RejectByNameFunc, error) {
 		return false
 	}, nil
 }
+
+func rejectBySize(maxSizeStr string) (RejectFunc, error) {
+	maxSize, err := parseSizeStr(maxSizeStr)
+	if err != nil {
+		return nil, err
+	}
+
+	return func(item string, fi os.FileInfo) bool {
+		// directory will be ignored
+		if fi.IsDir() {
+			return false
+		}
+
+		filesize := fi.Size()
+		if filesize > maxSize {
+			debug.Log("file %s is oversize: %d", item, filesize)
+			return true
+		}
+
+		return false
+	}, nil
+}
+
+func parseSizeStr(sizeStr string) (int64, error) {
+	numStr := sizeStr[:len(sizeStr)-1]
+	var unit int64 = 1
+
+	switch sizeStr[len(sizeStr)-1] {
+	case 'b', 'B':
+		// use initialized values, do nothing here
+	case 'k', 'K':
+		unit = 1024
+	case 'm', 'M':
+		unit = 1024 * 1024
+	case 'g', 'G':
+		unit = 1024 * 1024 * 1024
+	case 't', 'T':
+		unit = 1024 * 1024 * 1024 * 1024
+	default:
+		numStr = sizeStr
+	}
+	value, err := strconv.ParseInt(numStr, 10, 64)
+	if err != nil {
+		return 0, err
+	}
+	return value * unit, nil
+}
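The size-string parsing added above can be exercised as a standalone sketch. The `parseSize` name below is hypothetical; it mirrors the suffix handling in the diff's `parseSizeStr` (binary units, both letter cases, plain numbers without a suffix) but is not restic's actual function:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseSize converts a human-readable size such as "10M" or "1k" into bytes.
// The suffix handling follows the scheme in the diff above: b/B leave the
// unit at 1, k/m/g/t select binary multiples, and any other last character
// means the whole string is the number.
func parseSize(s string) (int64, error) {
	if s == "" {
		return 0, fmt.Errorf("empty size string")
	}
	numStr := s[:len(s)-1]
	var unit int64 = 1

	switch s[len(s)-1] {
	case 'b', 'B':
		// bytes: unit stays 1
	case 'k', 'K':
		unit = 1 << 10
	case 'm', 'M':
		unit = 1 << 20
	case 'g', 'G':
		unit = 1 << 30
	case 't', 'T':
		unit = 1 << 40
	default:
		// no recognized suffix: parse the whole string
		numStr = s
	}
	value, err := strconv.ParseInt(numStr, 10, 64)
	if err != nil {
		return 0, err
	}
	return value * unit, nil
}

func main() {
	for _, in := range []string{"1024", "1k", "10M", "2T"} {
		v, err := parseSize(in)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %d bytes\n", in, v)
	}
}
```

Note that, like the code in the diff, a parse error propagates to the caller, so `--exclude-larger-than` with a malformed value fails loudly instead of silently excluding nothing.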
@@ -189,3 +189,113 @@ func TestMultipleIsExcludedByFile(t *testing.T) {
 		}
 	}
 }
+
+func TestParseSizeStr(t *testing.T) {
+	sizeStrTests := []struct {
+		in       string
+		expected int64
+	}{
+		{"1024", 1024},
+		{"1024b", 1024},
+		{"1024B", 1024},
+		{"1k", 1024},
+		{"100k", 102400},
+		{"100K", 102400},
+		{"10M", 10485760},
+		{"100m", 104857600},
+		{"20G", 21474836480},
+		{"10g", 10737418240},
+		{"2T", 2199023255552},
+		{"2t", 2199023255552},
+	}
+
+	for _, tt := range sizeStrTests {
+		actual, err := parseSizeStr(tt.in)
+		test.OK(t, err)
+
+		if actual != tt.expected {
+			t.Errorf("parseSizeStr(%s) = %d; expected %d", tt.in, actual, tt.expected)
+		}
+	}
+}
+
+// TestIsExcludedByFileSize is for testing the instance of
+// --exclude-larger-than parameters
+func TestIsExcludedByFileSize(t *testing.T) {
+	tempDir, cleanup := test.TempDir(t)
+	defer cleanup()
+
+	// Max size of file is set to be 1k
+	maxSizeStr := "1k"
+
+	// Create some files in a temporary directory.
+	// Files in UPPERCASE will be used as exclusion triggers later on.
+	// We will test the inclusion later, so we add the expected value as
+	// a bool.
+	files := []struct {
+		path string
+		size int64
+		incl bool
+	}{
+		{"42", 100, true},
+
+		// everything in foodir except the FOOLARGE tagfile
+		// should not be included.
+		{"foodir/FOOLARGE", 2048, false},
+		{"foodir/foo", 1002, true},
+		{"foodir/foosub/underfoo", 100, true},
+
+		// everything in bardir except the BARLARGE tagfile
+		// should not be included.
+		{"bardir/BARLARGE", 1030, false},
+		{"bardir/bar", 1000, true},
+		{"bardir/barsub/underbar", 500, true},
+
+		// everything in bazdir should be included.
+		{"bazdir/baz", 100, true},
+		{"bazdir/bazsub/underbaz", 200, true},
+	}
+	var errs []error
+	for _, f := range files {
+		// create directories first, then the file
+		p := filepath.Join(tempDir, filepath.FromSlash(f.path))
+		errs = append(errs, os.MkdirAll(filepath.Dir(p), 0700))
+		file, err := os.OpenFile(p, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
+		errs = append(errs, err)
+		if err == nil {
+			// create a file with given size
+			errs = append(errs, file.Truncate(f.size))
+		}
+		errs = append(errs, file.Close())
+	}
+	test.OKs(t, errs) // see if anything went wrong during the creation
+
+	// create rejection function
+	sizeExclude, _ := rejectBySize(maxSizeStr)
+
+	// To mock the archiver scanning walk, we create a filepath.WalkFn
+	// that tests against the rejection function and stores
+	// the result in a map which we can test against later.
+	m := make(map[string]bool)
+	walk := func(p string, fi os.FileInfo, err error) error {
+		if err != nil {
+			return err
+		}
+
+		excluded := sizeExclude(p, fi)
+		// the log message helps debugging in case the test fails
+		t.Logf("%q: dir:%t; size:%d; excluded:%v", p, fi.IsDir(), fi.Size(), excluded)
+		m[p] = !excluded
+		return nil
+	}
+	// walk through the temporary directory and check the error
+	test.OK(t, filepath.Walk(tempDir, walk))
+
+	// compare whether the walk gave the expected values for the test cases
+	for _, f := range files {
+		p := filepath.Join(tempDir, filepath.FromSlash(f.path))
+		if m[p] != f.incl {
+			t.Errorf("inclusion status of %s is wrong: want %v, got %v", f.path, f.incl, m[p])
+		}
+	}
+}
@@ -22,10 +22,10 @@ func FindFilteredSnapshots(ctx context.Context, repo *repository.Repository, hos
 	// Process all snapshot IDs given as arguments.
 	for _, s := range snapshotIDs {
 		if s == "latest" {
+			usedFilter = true
 			id, err = restic.FindLatestSnapshot(ctx, repo, paths, tags, hosts)
 			if err != nil {
 				Warnf("Ignoring %q, no snapshot matched given filter (Paths:%v Tags:%v Hosts:%v)\n", s, paths, tags, hosts)
-				usedFilter = true
 				continue
 			}
 		} else {
@@ -51,12 +51,6 @@ func formatPercent(numerator uint64, denominator uint64) string {
 	return fmt.Sprintf("%3.2f%%", percent)
 }
 
-func formatRate(bytes uint64, duration time.Duration) string {
-	sec := float64(duration) / float64(time.Second)
-	rate := float64(bytes) / sec / (1 << 20)
-	return fmt.Sprintf("%.2fMiB/s", rate)
-}
-
 func formatDuration(d time.Duration) string {
 	sec := uint64(d / time.Second)
 	return formatSeconds(sec)
@@ -39,11 +39,13 @@ import (
 	"golang.org/x/crypto/ssh/terminal"
 )
 
-var version = "0.9.6-dev (compiled manually)"
+var version = "0.10.0"
 
 // TimeFormat is the format used for all timestamps printed by restic.
 const TimeFormat = "2006-01-02 15:04:05"
 
+type backendWrapper func(r restic.Backend) (restic.Backend, error)
+
 // GlobalOptions hold all global options for restic.
 type GlobalOptions struct {
 	Repo string
@@ -68,11 +70,13 @@ type GlobalOptions struct {
 	stdout io.Writer
 	stderr io.Writer
 
+	backendTestHook backendWrapper
+
 	// verbosity is set as follows:
 	// 0 means: don't print any messages except errors, this is used when --quiet is specified
 	// 1 is the default: print essential messages
 	// 2 means: print more messages, report minor things, this is used when --verbose is specified
-	// 3 means: print very detailed debug messages, this is used when --verbose 2 is specified
+	// 3 means: print very detailed debug messages, this is used when --verbose=2 is specified
 	verbosity uint
 
 	Options []string
@@ -97,11 +101,11 @@ func init() {
 
 	f := cmdRoot.PersistentFlags()
 	f.StringVarP(&globalOptions.Repo, "repo", "r", os.Getenv("RESTIC_REPOSITORY"), "`repository` to backup to or restore from (default: $RESTIC_REPOSITORY)")
-	f.StringVarP(&globalOptions.PasswordFile, "password-file", "p", os.Getenv("RESTIC_PASSWORD_FILE"), "read the repository password from a `file` (default: $RESTIC_PASSWORD_FILE)")
+	f.StringVarP(&globalOptions.PasswordFile, "password-file", "p", os.Getenv("RESTIC_PASSWORD_FILE"), "`file` to read the repository password from (default: $RESTIC_PASSWORD_FILE)")
 	f.StringVarP(&globalOptions.KeyHint, "key-hint", "", os.Getenv("RESTIC_KEY_HINT"), "`key` ID of key to try decrypting first (default: $RESTIC_KEY_HINT)")
-	f.StringVarP(&globalOptions.PasswordCommand, "password-command", "", os.Getenv("RESTIC_PASSWORD_COMMAND"), "specify a shell `command` to obtain a password (default: $RESTIC_PASSWORD_COMMAND)")
+	f.StringVarP(&globalOptions.PasswordCommand, "password-command", "", os.Getenv("RESTIC_PASSWORD_COMMAND"), "shell `command` to obtain the repository password from (default: $RESTIC_PASSWORD_COMMAND)")
 	f.BoolVarP(&globalOptions.Quiet, "quiet", "q", false, "do not output comprehensive progress report")
-	f.CountVarP(&globalOptions.Verbose, "verbose", "v", "be verbose (specify --verbose multiple times or level `n`)")
+	f.CountVarP(&globalOptions.Verbose, "verbose", "v", "be verbose (specify --verbose multiple times or level --verbose=`n`)")
 	f.BoolVar(&globalOptions.NoLock, "no-lock", false, "do not lock the repo, this allows some operations on read-only repos")
 	f.BoolVarP(&globalOptions.JSON, "json", "", false, "set output mode to JSON for commands that support it")
 	f.StringVar(&globalOptions.CacheDir, "cache-dir", "", "set the cache `directory`. (default: use system default cache directory)")
@@ -270,7 +274,7 @@ func Exitf(exitcode int, format string, args ...interface{}) {
 }
 
 // resolvePassword determines the password to be used for opening the repository.
-func resolvePassword(opts GlobalOptions) (string, error) {
+func resolvePassword(opts GlobalOptions, envStr string) (string, error) {
 	if opts.PasswordFile != "" && opts.PasswordCommand != "" {
 		return "", errors.Fatalf("Password file and command are mutually exclusive options")
 	}
@@ -295,7 +299,7 @@ func resolvePassword(opts GlobalOptions) (string, error) {
 		return strings.TrimSpace(string(s)), errors.Wrap(err, "Readfile")
 	}
 
-	if pwd := os.Getenv("RESTIC_PASSWORD"); pwd != "" {
+	if pwd := os.Getenv(envStr); pwd != "" {
 		return pwd, nil
 	}
 
@@ -395,6 +399,14 @@ func OpenRepository(opts GlobalOptions) (*repository.Repository, error) {
 		Warnf("%v returned error, retrying after %v: %v\n", msg, d, err)
 	})
 
+	// wrap backend if a test specified a hook
+	if opts.backendTestHook != nil {
+		be, err = opts.backendTestHook(be)
+		if err != nil {
+			return nil, err
+		}
+	}
+
 	s := repository.New(be)
 
 	passwordTriesLeft := 1
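The `backendTestHook` pattern above — building a value, then letting an optional hook function wrap it before use — can be sketched independently of restic. All names in this example (`Backend`, `plain`, `logging`, `open`) are illustrative stand-ins, not restic's real types:

```go
package main

import "fmt"

// Backend is a stand-in for restic.Backend.
type Backend interface {
	Name() string
}

type plain struct{}

func (plain) Name() string { return "plain" }

// logging wraps another Backend, the way a test hook might wrap the real
// backend to observe or fault-inject its calls.
type logging struct{ inner Backend }

func (l logging) Name() string { return "logging(" + l.inner.Name() + ")" }

// backendWrapper mirrors the hook type introduced in the diff above.
type backendWrapper func(b Backend) (Backend, error)

// open builds the backend and, if a hook is set, lets the hook wrap the
// result -- the same shape as the opts.backendTestHook check in OpenRepository.
func open(hook backendWrapper) (Backend, error) {
	var be Backend = plain{}
	if hook != nil {
		wrapped, err := hook(be)
		if err != nil {
			return nil, err
		}
		be = wrapped
	}
	return be, nil
}

func main() {
	be, _ := open(nil)
	fmt.Println(be.Name()) // plain
	be, _ = open(func(b Backend) (Backend, error) { return logging{b}, nil })
	fmt.Println(be.Name()) // logging(plain)
}
```

Keeping the hook `nil` in production code means the wrapping costs nothing outside tests, while tests can swap in arbitrary decorators without changing `open`'s callers.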
@@ -1,6 +1,7 @@
 package main
 
 import (
+	"bytes"
 	"context"
 	"fmt"
 	"io/ioutil"
@@ -54,7 +55,7 @@ func walkDir(dir string) <-chan *dirEntry {
 	}()
 
 	// first element is root
-	_ = <-ch
+	<-ch
 
 	return ch
 }
@@ -72,27 +73,16 @@ func sameModTime(fi1, fi2 os.FileInfo) bool {
 		}
 	}
 
-	same := fi1.ModTime().Equal(fi2.ModTime())
-	if !same && (runtime.GOOS == "darwin" || runtime.GOOS == "openbsd") {
-		// Allow up to 1μs difference, because macOS <10.13 cannot restore
-		// with nanosecond precision and the current version of Go (1.9.2)
-		// does not yet support the new syscall. (#1087)
-		mt1 := fi1.ModTime()
-		mt2 := fi2.ModTime()
-		usecDiff := (mt1.Nanosecond()-mt2.Nanosecond())/1000 + (mt1.Second()-mt2.Second())*1000000
-		same = usecDiff <= 1 && usecDiff >= -1
-	}
-	return same
+	return fi1.ModTime().Equal(fi2.ModTime())
 }
 
-// directoriesEqualContents checks if both directories contain exactly the same
-// contents.
-func directoriesEqualContents(dir1, dir2 string) bool {
+// directoriesContentsDiff returns a diff between both directories. If these
+// contain exactly the same contents, then the diff is an empty string.
+func directoriesContentsDiff(dir1, dir2 string) string {
+	var out bytes.Buffer
 	ch1 := walkDir(dir1)
 	ch2 := walkDir(dir2)
 
-	changes := false
-
 	var a, b *dirEntry
 	for {
 		var ok bool
@@ -116,36 +106,27 @@ func directoriesEqualContents(dir1, dir2 string) bool {
 		}
 
 		if ch1 == nil {
-			fmt.Printf("+%v\n", b.path)
-			changes = true
+			fmt.Fprintf(&out, "+%v\n", b.path)
 		} else if ch2 == nil {
-			fmt.Printf("-%v\n", a.path)
-			changes = true
-		} else if !a.equals(b) {
+			fmt.Fprintf(&out, "-%v\n", a.path)
+		} else if !a.equals(&out, b) {
 			if a.path < b.path {
-				fmt.Printf("-%v\n", a.path)
-				changes = true
+				fmt.Fprintf(&out, "-%v\n", a.path)
 				a = nil
 				continue
 			} else if a.path > b.path {
-				fmt.Printf("+%v\n", b.path)
-				changes = true
+				fmt.Fprintf(&out, "+%v\n", b.path)
 				b = nil
 				continue
 			} else {
-				fmt.Printf("%%%v\n", a.path)
-				changes = true
+				fmt.Fprintf(&out, "%%%v\n", a.path)
 			}
 		}
 
 		a, b = nil, nil
 	}
 
-	if changes {
-		return false
-	}
-
-	return true
+	return out.String()
 }
 
 type dirStat struct {
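The core of `directoriesContentsDiff` above is a two-pointer merge over two sorted listings, emitting `-` for entries only on the left and `+` for entries only on the right. A minimal sketch of that merge scheme, simplified from channels of `*dirEntry` to plain string slices (the `diffSorted` name is hypothetical):

```go
package main

import "fmt"

// diffSorted merges two sorted listings and reports entries present on only
// one side: "-" for entries only in a, "+" for entries only in b. Entries
// present in both produce no output, so identical listings yield "".
func diffSorted(a, b []string) string {
	out := ""
	i, j := 0, 0
	for i < len(a) || j < len(b) {
		switch {
		case j == len(b) || (i < len(a) && a[i] < b[j]):
			// entry only in a
			out += fmt.Sprintf("-%v\n", a[i])
			i++
		case i == len(a) || b[j] < a[i]:
			// entry only in b
			out += fmt.Sprintf("+%v\n", b[j])
			j++
		default:
			// same path on both sides: nothing to report
			i++
			j++
		}
	}
	return out
}

func main() {
	fmt.Print(diffSorted([]string{"a", "c"}, []string{"b", "c"}))
}
```

Returning the accumulated diff string instead of printing and returning a bool, as the change above does, lets the caller embed the diff in a test failure message.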
@@ -4,25 +4,26 @@ package main
 
 import (
 	"fmt"
+	"io"
 	"io/ioutil"
 	"os"
 	"path/filepath"
 	"syscall"
 )
 
-func (e *dirEntry) equals(other *dirEntry) bool {
+func (e *dirEntry) equals(out io.Writer, other *dirEntry) bool {
 	if e.path != other.path {
-		fmt.Fprintf(os.Stderr, "%v: path does not match (%v != %v)\n", e.path, e.path, other.path)
+		fmt.Fprintf(out, "%v: path does not match (%v != %v)\n", e.path, e.path, other.path)
 		return false
 	}
 
 	if e.fi.Mode() != other.fi.Mode() {
-		fmt.Fprintf(os.Stderr, "%v: mode does not match (%v != %v)\n", e.path, e.fi.Mode(), other.fi.Mode())
+		fmt.Fprintf(out, "%v: mode does not match (%v != %v)\n", e.path, e.fi.Mode(), other.fi.Mode())
 		return false
 	}
 
 	if !sameModTime(e.fi, other.fi) {
-		fmt.Fprintf(os.Stderr, "%v: ModTime does not match (%v != %v)\n", e.path, e.fi.ModTime(), other.fi.ModTime())
+		fmt.Fprintf(out, "%v: ModTime does not match (%v != %v)\n", e.path, e.fi.ModTime(), other.fi.ModTime())
 		return false
 	}
 
@@ -30,17 +31,17 @@ func (e *dirEntry) equals(other *dirEntry) bool {
 	stat2, _ := other.fi.Sys().(*syscall.Stat_t)
 
 	if stat.Uid != stat2.Uid {
-		fmt.Fprintf(os.Stderr, "%v: UID does not match (%v != %v)\n", e.path, stat.Uid, stat2.Uid)
+		fmt.Fprintf(out, "%v: UID does not match (%v != %v)\n", e.path, stat.Uid, stat2.Uid)
 		return false
 	}
 
 	if stat.Gid != stat2.Gid {
-		fmt.Fprintf(os.Stderr, "%v: GID does not match (%v != %v)\n", e.path, stat.Gid, stat2.Gid)
+		fmt.Fprintf(out, "%v: GID does not match (%v != %v)\n", e.path, stat.Gid, stat2.Gid)
 		return false
 	}
 
 	if stat.Nlink != stat2.Nlink {
-		fmt.Fprintf(os.Stderr, "%v: Number of links do not match (%v != %v)\n", e.path, stat.Nlink, stat2.Nlink)
+		fmt.Fprintf(out, "%v: Number of links do not match (%v != %v)\n", e.path, stat.Nlink, stat2.Nlink)
 		return false
 	}
 
@@ -4,23 +4,24 @@ package main
 
 import (
 	"fmt"
+	"io"
 	"io/ioutil"
 	"os"
 )
 
-func (e *dirEntry) equals(other *dirEntry) bool {
+func (e *dirEntry) equals(out io.Writer, other *dirEntry) bool {
 	if e.path != other.path {
-		fmt.Fprintf(os.Stderr, "%v: path does not match (%v != %v)\n", e.path, e.path, other.path)
+		fmt.Fprintf(out, "%v: path does not match (%v != %v)\n", e.path, e.path, other.path)
 		return false
 	}
 
 	if e.fi.Mode() != other.fi.Mode() {
-		fmt.Fprintf(os.Stderr, "%v: mode does not match (%v != %v)\n", e.path, e.fi.Mode(), other.fi.Mode())
+		fmt.Fprintf(out, "%v: mode does not match (%v != %v)\n", e.path, e.fi.Mode(), other.fi.Mode())
 		return false
 	}
 
 	if !sameModTime(e.fi, other.fi) {
-		fmt.Fprintf(os.Stderr, "%v: ModTime does not match (%v != %v)\n", e.path, e.fi.ModTime(), other.fi.ModTime())
+		fmt.Fprintf(out, "%v: ModTime does not match (%v != %v)\n", e.path, e.fi.ModTime(), other.fi.ModTime())
 		return false
 	}
 
@@ -13,6 +13,7 @@ import (
 	"os"
 	"path/filepath"
 	"regexp"
+	"runtime"
 	"strings"
 	"syscall"
 	"testing"
@@ -50,11 +51,11 @@ func testRunInit(t testing.TB, opts GlobalOptions) {
 	restic.TestDisableCheckPolynomial(t)
 	restic.TestSetLockTimeout(t, 0)
 
-	rtest.OK(t, runInit(opts, nil))
+	rtest.OK(t, runInit(InitOptions{}, opts, nil))
 	t.Logf("repository initialized at %v", opts.Repo)
 }
 
-func testRunBackup(t testing.TB, dir string, target []string, opts BackupOptions, gopts GlobalOptions) {
+func testRunBackupAssumeFailure(t testing.TB, dir string, target []string, opts BackupOptions, gopts GlobalOptions) error {
 	ctx, cancel := context.WithCancel(gopts.ctx)
 	defer cancel()
 
@@ -69,7 +70,7 @@ func testRunBackup(t testing.TB, dir string, target []string, opts BackupOptions
 		defer cleanup()
 	}
 
-	rtest.OK(t, runBackup(opts, gopts, term, target))
+	backupErr := runBackup(opts, gopts, term, target)
 
 	cancel()
 
@@ -77,6 +78,13 @@ func testRunBackup(t testing.TB, dir string, target []string, opts BackupOptions
 	if err != nil {
 		t.Fatal(err)
 	}
+
+	return backupErr
+}
+
+func testRunBackup(t testing.TB, dir string, target []string, opts BackupOptions, gopts GlobalOptions) {
+	err := testRunBackupAssumeFailure(t, dir, target, opts, gopts)
+	rtest.Assert(t, err == nil, "Error while backing up")
 }
 
 func testRunList(t testing.TB, tpe string, opts GlobalOptions) restic.IDs {
@@ -143,6 +151,21 @@ func testRunCheckOutput(gopts GlobalOptions) (string, error) {
 	}
 
 	err := runCheck(opts, gopts, nil)
+	return buf.String(), err
+}
+
+func testRunDiffOutput(gopts GlobalOptions, firstSnapshotID string, secondSnapshotID string) (string, error) {
+	buf := bytes.NewBuffer(nil)
+
+	globalOptions.stdout = buf
+	defer func() {
+		globalOptions.stdout = os.Stdout
+	}()
+
+	opts := DiffOptions{
+		ShowMetadata: false,
+	}
+	err := runDiff(opts, gopts, []string{firstSnapshotID, secondSnapshotID})
 	return string(buf.Bytes()), err
 }
 
@@ -169,7 +192,7 @@ func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
 
 	rtest.OK(t, runLs(opts, gopts, []string{snapshotID}))
 
-	return strings.Split(string(buf.Bytes()), "\n")
+	return strings.Split(buf.String(), "\n")
 }
 
 func testRunFind(t testing.TB, wantJSON bool, gopts GlobalOptions, pattern string) []byte {
@@ -245,29 +268,24 @@ func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
 		"Expected 1 snapshot to be kept, got %v", len(forgets[0].Keep))
 	rtest.Assert(t, len(forgets[0].Remove) == 2,
 		"Expected 2 snapshots to be removed, got %v", len(forgets[0].Remove))
-	return
 }
 
 func testRunPrune(t testing.TB, gopts GlobalOptions) {
 	rtest.OK(t, runPrune(gopts))
 }
 
+func testSetupBackupData(t testing.TB, env *testEnvironment) string {
+	datafile := filepath.Join("testdata", "backup-data.tar.gz")
+	testRunInit(t, env.gopts)
+	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	return datafile
+}
+
 func TestBackup(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()
 
-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	fd, err := os.Open(datafile)
-	if os.IsNotExist(errors.Cause(err)) {
-		t.Skipf("unable to find data file %q, skipping", datafile)
-		return
-	}
-	rtest.OK(t, err)
-	rtest.OK(t, fd.Close())
-
-	testRunInit(t, env.gopts)
-
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	testSetupBackupData(t, env)
 	opts := BackupOptions{}
 
 	// first backup
@@ -309,9 +327,9 @@ func TestBackup(t *testing.T) {
 	for i, snapshotID := range snapshotIDs {
 		restoredir := filepath.Join(env.base, fmt.Sprintf("restore%d", i))
 		t.Logf("restoring snapshot %v to %v", snapshotID.Str(), restoredir)
-		testRunRestore(t, env.gopts, restoredir, snapshotIDs[0])
-		rtest.Assert(t, directoriesEqualContents(env.testdata, filepath.Join(restoredir, "testdata")),
-			"directories are not equal")
+		testRunRestore(t, env.gopts, restoredir, snapshotID)
+		diff := directoriesContentsDiff(env.testdata, filepath.Join(restoredir, "testdata"))
+		rtest.Assert(t, diff == "", "directories are not equal: %v", diff)
 	}
 
 	testRunCheck(t, env.gopts)
@@ -321,18 +339,7 @@ func TestBackupNonExistingFile(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()
 
-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	fd, err := os.Open(datafile)
-	if os.IsNotExist(errors.Cause(err)) {
-		t.Skipf("unable to find data file %q, skipping", datafile)
-		return
-	}
-	rtest.OK(t, err)
-	rtest.OK(t, fd.Close())
-
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
-
-	testRunInit(t, env.gopts)
+	testSetupBackupData(t, env)
 	globalOptions.stderr = ioutil.Discard
 	defer func() {
 		globalOptions.stderr = os.Stderr
@@ -351,6 +358,58 @@ func TestBackupNonExistingFile(t *testing.T) {
 	testRunBackup(t, "", dirs, opts, env.gopts)
 }
 
+func removeDataPacksExcept(gopts GlobalOptions, t *testing.T, keep restic.IDSet) {
+	r, err := OpenRepository(gopts)
+	rtest.OK(t, err)
+
+	// Get all tree packs
+	rtest.OK(t, r.LoadIndex(gopts.ctx))
+	treePacks := restic.NewIDSet()
+	for _, idx := range r.Index().(*repository.MasterIndex).All() {
+		for _, id := range idx.TreePacks() {
+			treePacks.Insert(id)
+		}
+	}
+
+	// remove all packs containing data blobs
+	rtest.OK(t, r.List(gopts.ctx, restic.PackFile, func(id restic.ID, size int64) error {
+		if treePacks.Has(id) || keep.Has(id) {
+			return nil
+		}
+		return r.Backend().Remove(gopts.ctx, restic.Handle{Type: restic.PackFile, Name: id.String()})
+	}))
+}
+
+func TestBackupSelfHealing(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+
+	testRunInit(t, env.gopts)
+
+	p := filepath.Join(env.testdata, "test/test")
+	rtest.OK(t, os.MkdirAll(filepath.Dir(p), 0755))
+	rtest.OK(t, appendRandomData(p, 5))
+
+	opts := BackupOptions{}
+
+	testRunBackup(t, filepath.Dir(env.testdata), []string{filepath.Base(env.testdata)}, opts, env.gopts)
+	testRunCheck(t, env.gopts)
+
+	// remove all data packs
+	removeDataPacksExcept(env.gopts, t, restic.NewIDSet())
+
+	testRunRebuildIndex(t, env.gopts)
+	// now the repo is also missing the data blob in the index; check should report this
+	rtest.Assert(t, runCheck(CheckOptions{}, env.gopts, nil) != nil,
+		"check should have reported an error")
+
+	// second backup should report an error but "heal" this situation
+	err := testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{filepath.Base(env.testdata)}, opts, env.gopts)
+	rtest.Assert(t, err != nil,
+		"backup should have reported an error")
+	testRunCheck(t, env.gopts)
+}
+
 func includes(haystack []string, needle string) bool {
 	for _, s := range haystack {
 		if s == needle {
@@ -405,7 +464,7 @@ func TestBackupExclude(t *testing.T) {
 		f, err := os.Create(fp)
 		rtest.OK(t, err)

-		fmt.Fprintf(f, filename)
+		fmt.Fprint(f, filename)
 		rtest.OK(t, f.Close())
 	}

@@ -436,6 +495,32 @@ func TestBackupExclude(t *testing.T) {
 		"expected file %q not in first snapshot, but it's included", "passwords.txt")
 }

+func TestBackupErrors(t *testing.T) {
+	if runtime.GOOS == "windows" {
+		return
+	}
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+
+	testSetupBackupData(t, env)
+
+	// Assume failure
+	inaccessibleFile := filepath.Join(env.testdata, "0", "0", "9", "0")
+	os.Chmod(inaccessibleFile, 0000)
+	defer func() {
+		os.Chmod(inaccessibleFile, 0644)
+	}()
+	opts := BackupOptions{}
+	gopts := env.gopts
+	gopts.stderr = ioutil.Discard
+	err := testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, opts, gopts)
+	rtest.Assert(t, err != nil, "Assumed failure, but no error occured.")
+	rtest.Assert(t, err == ErrInvalidSourceData, "Wrong error returned")
+	snapshotIDs := testRunList(t, "snapshots", env.gopts)
+	rtest.Assert(t, len(snapshotIDs) == 1,
+		"expected one snapshot, got %v", snapshotIDs)
+}
+
 const (
 	incrementalFirstWrite  = 10 * 1042 * 1024
 	incrementalSecondWrite = 1 * 1042 * 1024
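TestBackupErrors above compares the returned error against ErrInvalidSourceData with `==`, which only works because the command returns a single shared sentinel value. A minimal, self-contained sketch of that pattern (the names and messages here are illustrative stand-ins, not restic's actual definitions):

```go
package main

import (
	"errors"
	"fmt"
)

// A sentinel error is a package-level value; callers compare against it
// with == (or errors.Is, which also matches through wrapped errors).
var ErrInvalidSourceData = errors.New("at least one source file could not be read")

// backup is a stand-in for a command that finishes its work but still
// reports partial failure via the shared sentinel.
func backup(unreadableFiles int) error {
	if unreadableFiles > 0 {
		return ErrInvalidSourceData
	}
	return nil
}

func main() {
	err := backup(1)
	fmt.Println(err == ErrInvalidSourceData)          // direct comparison, as in the test
	fmt.Println(errors.Is(err, ErrInvalidSourceData)) // robust to wrapping
}
```

The direct `==` comparison is safe here precisely because the same exported variable is returned unmodified; once errors get wrapped, `errors.Is` is the robust form.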
@@ -506,10 +591,7 @@ func TestBackupTags(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	testRunInit(t, env.gopts)
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	testSetupBackupData(t, env)

 	opts := BackupOptions{}

 	testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
@@ -532,6 +614,153 @@ func TestBackupTags(t *testing.T) {
 		"expected parent to be %v, got %v", parent.ID, newest.Parent)
 }

+func testRunCopy(t testing.TB, srcGopts GlobalOptions, dstGopts GlobalOptions) {
+	copyOpts := CopyOptions{
+		secondaryRepoOptions: secondaryRepoOptions{
+			Repo:     dstGopts.Repo,
+			password: dstGopts.password,
+		},
+	}
+
+	rtest.OK(t, runCopy(copyOpts, srcGopts, nil))
+}
+
+func TestCopy(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+	env2, cleanup2 := withTestEnvironment(t)
+	defer cleanup2()
+
+	testSetupBackupData(t, env)
+	opts := BackupOptions{}
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "2")}, opts, env.gopts)
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "3")}, opts, env.gopts)
+	testRunCheck(t, env.gopts)
+
+	testRunInit(t, env2.gopts)
+	testRunCopy(t, env.gopts, env2.gopts)
+
+	snapshotIDs := testRunList(t, "snapshots", env.gopts)
+	copiedSnapshotIDs := testRunList(t, "snapshots", env2.gopts)
+
+	// Check that the copies size seems reasonable
+	rtest.Assert(t, len(snapshotIDs) == len(copiedSnapshotIDs), "expected %v snapshots, found %v",
+		len(snapshotIDs), len(copiedSnapshotIDs))
+	stat := dirStats(env.repo)
+	stat2 := dirStats(env2.repo)
+	sizeDiff := int64(stat.size) - int64(stat2.size)
+	if sizeDiff < 0 {
+		sizeDiff = -sizeDiff
+	}
+	rtest.Assert(t, sizeDiff < int64(stat.size)/50, "expected less than 2%% size difference: %v vs. %v",
+		stat.size, stat2.size)
+
+	// Check integrity of the copy
+	testRunCheck(t, env2.gopts)
+
+	// Check that the copied snapshots have the same tree contents as the old ones (= identical tree hash)
+	origRestores := make(map[string]struct{})
+	for i, snapshotID := range snapshotIDs {
+		restoredir := filepath.Join(env.base, fmt.Sprintf("restore%d", i))
+		origRestores[restoredir] = struct{}{}
+		testRunRestore(t, env.gopts, restoredir, snapshotID)
+	}
+	for i, snapshotID := range copiedSnapshotIDs {
+		restoredir := filepath.Join(env2.base, fmt.Sprintf("restore%d", i))
+		testRunRestore(t, env2.gopts, restoredir, snapshotID)
+		foundMatch := false
+		for cmpdir := range origRestores {
+			diff := directoriesContentsDiff(restoredir, cmpdir)
+			if diff == "" {
+				delete(origRestores, cmpdir)
+				foundMatch = true
+			}
+		}
+
+		rtest.Assert(t, foundMatch, "found no counterpart for snapshot %v", snapshotID)
+	}
+
+	rtest.Assert(t, len(origRestores) == 0, "found not copied snapshots")
+}
+
+func TestCopyIncremental(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+	env2, cleanup2 := withTestEnvironment(t)
+	defer cleanup2()
+
+	testSetupBackupData(t, env)
+	opts := BackupOptions{}
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "2")}, opts, env.gopts)
+	testRunCheck(t, env.gopts)
+
+	testRunInit(t, env2.gopts)
+	testRunCopy(t, env.gopts, env2.gopts)
+
+	snapshotIDs := testRunList(t, "snapshots", env.gopts)
+	copiedSnapshotIDs := testRunList(t, "snapshots", env2.gopts)
+
+	// Check that the copies size seems reasonable
+	testRunCheck(t, env2.gopts)
+	rtest.Assert(t, len(snapshotIDs) == len(copiedSnapshotIDs), "expected %v snapshots, found %v",
+		len(snapshotIDs), len(copiedSnapshotIDs))
+
+	// check that no snapshots are copied, as there are no new ones
+	testRunCopy(t, env.gopts, env2.gopts)
+	testRunCheck(t, env2.gopts)
+	copiedSnapshotIDs = testRunList(t, "snapshots", env2.gopts)
+	rtest.Assert(t, len(snapshotIDs) == len(copiedSnapshotIDs), "still expected %v snapshots, found %v",
+		len(snapshotIDs), len(copiedSnapshotIDs))
+
+	// check that only new snapshots are copied
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "3")}, opts, env.gopts)
+	testRunCopy(t, env.gopts, env2.gopts)
+	testRunCheck(t, env2.gopts)
+	snapshotIDs = testRunList(t, "snapshots", env.gopts)
+	copiedSnapshotIDs = testRunList(t, "snapshots", env2.gopts)
+	rtest.Assert(t, len(snapshotIDs) == len(copiedSnapshotIDs), "still expected %v snapshots, found %v",
+		len(snapshotIDs), len(copiedSnapshotIDs))
+
+	// also test the reverse direction
+	testRunCopy(t, env2.gopts, env.gopts)
+	testRunCheck(t, env.gopts)
+	snapshotIDs = testRunList(t, "snapshots", env.gopts)
+	rtest.Assert(t, len(snapshotIDs) == len(copiedSnapshotIDs), "still expected %v snapshots, found %v",
+		len(copiedSnapshotIDs), len(snapshotIDs))
+}
+
+func TestInitCopyChunkerParams(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+	env2, cleanup2 := withTestEnvironment(t)
+	defer cleanup2()
+
+	testRunInit(t, env2.gopts)
+
+	initOpts := InitOptions{
+		secondaryRepoOptions: secondaryRepoOptions{
+			Repo:     env2.gopts.Repo,
+			password: env2.gopts.password,
+		},
+	}
+	rtest.Assert(t, runInit(initOpts, env.gopts, nil) != nil, "expected invalid init options to fail")
+
+	initOpts.CopyChunkerParameters = true
+	rtest.OK(t, runInit(initOpts, env.gopts, nil))
+
+	repo, err := OpenRepository(env.gopts)
+	rtest.OK(t, err)
+
+	otherRepo, err := OpenRepository(env2.gopts)
+	rtest.OK(t, err)
+
+	rtest.Assert(t, repo.Config().ChunkerPolynomial == otherRepo.Config().ChunkerPolynomial,
+		"expected equal chunker polynomials, got %v expected %v", repo.Config().ChunkerPolynomial,
+		otherRepo.Config().ChunkerPolynomial)
+}
+
 func testRunTag(t testing.TB, opts TagOptions, gopts GlobalOptions) {
 	rtest.OK(t, runTag(opts, gopts, []string{}))
 }
@@ -540,10 +769,7 @@ func TestTag(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	testRunInit(t, env.gopts)
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	testSetupBackupData(t, env)

 	testRunBackup(t, "", []string{env.testdata}, BackupOptions{}, env.gopts)
 	testRunCheck(t, env.gopts)
 	newest, _ := testRunSnapshots(t, env.gopts)
@@ -639,6 +865,28 @@ func testRunKeyAddNewKey(t testing.TB, newPassword string, gopts GlobalOptions)
 	rtest.OK(t, runKey(gopts, []string{"add"}))
 }

+func testRunKeyAddNewKeyUserHost(t testing.TB, gopts GlobalOptions) {
+	testKeyNewPassword = "john's geheimnis"
+	defer func() {
+		testKeyNewPassword = ""
+		keyUsername = ""
+		keyHostname = ""
+	}()
+
+	cmdKey.Flags().Parse([]string{"--user=john", "--host=example.com"})
+
+	t.Log("adding key for john@example.com")
+	rtest.OK(t, runKey(gopts, []string{"add"}))
+
+	repo, err := OpenRepository(gopts)
+	rtest.OK(t, err)
+	key, err := repository.SearchKey(gopts.ctx, repo, testKeyNewPassword, 1, "")
+	rtest.OK(t, err)
+
+	rtest.Equals(t, "john", key.Username)
+	rtest.Equals(t, "example.com", key.Hostname)
+}
+
 func testRunKeyPasswd(t testing.TB, newPassword string, gopts GlobalOptions) {
 	testKeyNewPassword = newPassword
 	defer func() {
@@ -681,6 +929,8 @@ func TestKeyAddRemove(t *testing.T) {
 	t.Logf("testing access with last password %q\n", env.gopts.password)
 	rtest.OK(t, runKey(env.gopts, []string{"list"}))
 	testRunCheck(t, env.gopts)
+
+	testRunKeyAddNewKeyUserHost(t, env.gopts)
 }

 func testFileSize(filename string, size int64) error {
@@ -767,8 +1017,8 @@ func TestRestore(t *testing.T) {
 	restoredir := filepath.Join(env.base, "restore")
 	testRunRestoreLatest(t, env.gopts, restoredir, nil, nil)

-	rtest.Assert(t, directoriesEqualContents(env.testdata, filepath.Join(restoredir, filepath.Base(env.testdata))),
-		"directories are not equal")
+	diff := directoriesContentsDiff(env.testdata, filepath.Join(restoredir, filepath.Base(env.testdata)))
+	rtest.Assert(t, diff == "", "directories are not equal %v", diff)
 }

 func TestRestoreLatest(t *testing.T) {
@@ -901,14 +1151,14 @@ func TestRestoreNoMetadataOnIgnoredIntermediateDirs(t *testing.T) {
 	testRunRestoreIncludes(t, env.gopts, filepath.Join(env.base, "restore0"), snapshotID, []string{"*.ext"})

 	f1 := filepath.Join(env.base, "restore0", "testdata", "subdir1", "subdir2")
-	fi, err := os.Stat(f1)
+	_, err := os.Stat(f1)
 	rtest.OK(t, err)

 	// restore with filter "*", this should restore meta data on everything.
 	testRunRestoreIncludes(t, env.gopts, filepath.Join(env.base, "restore1"), snapshotID, []string{"*"})

 	f2 := filepath.Join(env.base, "restore1", "testdata", "subdir1", "subdir2")
-	fi, err = os.Stat(f2)
+	fi, err := os.Stat(f2)
 	rtest.OK(t, err)

 	rtest.Assert(t, fi.ModTime() == time.Unix(0, 0),
@@ -919,10 +1169,7 @@ func TestFind(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	testRunInit(t, env.gopts)
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	datafile := testSetupBackupData(t, env)

 	opts := BackupOptions{}

 	testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
@@ -959,10 +1206,7 @@ func TestFindJSON(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	testRunInit(t, env.gopts)
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	datafile := testSetupBackupData(t, env)

 	opts := BackupOptions{}

 	testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
@@ -1023,6 +1267,37 @@ func TestRebuildIndexAlwaysFull(t *testing.T) {
 	TestRebuildIndex(t)
 }

+type appendOnlyBackend struct {
+	restic.Backend
+}
+
+// called via repo.Backend().Remove()
+func (b *appendOnlyBackend) Remove(ctx context.Context, h restic.Handle) error {
+	return errors.Errorf("Failed to remove %v", h)
+}
+
+func TestRebuildIndexFailsOnAppendOnly(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+
+	datafile := filepath.Join("..", "..", "internal", "checker", "testdata", "duplicate-packs-in-index-test-repo.tar.gz")
+	rtest.SetupTarTestFixture(t, env.base, datafile)
+
+	globalOptions.stdout = ioutil.Discard
+	defer func() {
+		globalOptions.stdout = os.Stdout
+	}()
+
+	env.gopts.backendTestHook = func(r restic.Backend) (restic.Backend, error) {
+		return &appendOnlyBackend{r}, nil
+	}
+	err := runRebuildIndex(env.gopts)
+	if err == nil {
+		t.Error("expected rebuildIndex to fail")
+	}
+	t.Log(err)
+}
+
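The appendOnlyBackend wrapper above relies on Go struct embedding: the wrapper embeds the full interface value, inherits every method through promotion, and overrides only Remove. A self-contained sketch of the same technique with a hypothetical two-method interface (store, memStore, and appendOnly are illustrative names, not restic types):

```go
package main

import (
	"errors"
	"fmt"
)

// store is a hypothetical backend interface with more methods than we
// want to reimplement in a wrapper.
type store interface {
	Save(name string) error
	Remove(name string) error
}

type memStore struct{}

func (memStore) Save(name string) error   { return nil }
func (memStore) Remove(name string) error { return nil }

// appendOnly embeds the interface value: Save is promoted and forwarded
// to the wrapped store automatically; only Remove is overridden to fail.
type appendOnly struct {
	store
}

func (appendOnly) Remove(name string) error {
	return errors.New("append-only: refusing to remove " + name)
}

func main() {
	var s store = appendOnly{memStore{}}
	fmt.Println(s.Save("a"))   // forwarded to memStore
	fmt.Println(s.Remove("a")) // overridden: fails
}
```

This is why the test only needs a three-line type to turn an arbitrary backend append-only: every method it does not mention keeps the embedded backend's behavior.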
 func TestCheckRestoreNoLock(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()
@@ -1054,18 +1329,7 @@ func TestPrune(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	fd, err := os.Open(datafile)
-	if os.IsNotExist(errors.Cause(err)) {
-		t.Skipf("unable to find data file %q, skipping", datafile)
-		return
-	}
-	rtest.OK(t, err)
-	rtest.OK(t, fd.Close())
-
-	testRunInit(t, env.gopts)
-
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	testSetupBackupData(t, env)
 	opts := BackupOptions{}

 	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
@@ -1086,6 +1350,58 @@ func TestPrune(t *testing.T) {
 	testRunCheck(t, env.gopts)
 }

+func listPacks(gopts GlobalOptions, t *testing.T) restic.IDSet {
+	r, err := OpenRepository(gopts)
+	rtest.OK(t, err)
+
+	packs := restic.NewIDSet()
+
+	rtest.OK(t, r.List(gopts.ctx, restic.PackFile, func(id restic.ID, size int64) error {
+		packs.Insert(id)
+		return nil
+	}))
+	return packs
+}
+
+func TestPruneWithDamagedRepository(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+
+	datafile := filepath.Join("testdata", "backup-data.tar.gz")
+	testRunInit(t, env.gopts)
+
+	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	opts := BackupOptions{}
+
+	// create and delete snapshot to create unused blobs
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "2")}, opts, env.gopts)
+	firstSnapshot := testRunList(t, "snapshots", env.gopts)
+	rtest.Assert(t, len(firstSnapshot) == 1,
+		"expected one snapshot, got %v", firstSnapshot)
+	testRunForget(t, env.gopts, firstSnapshot[0].String())
+
+	oldPacks := listPacks(env.gopts, t)
+
+	// create new snapshot, but lose all data
+	testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "3")}, opts, env.gopts)
+	snapshotIDs := testRunList(t, "snapshots", env.gopts)

+	removeDataPacksExcept(env.gopts, t, oldPacks)
+
+	rtest.Assert(t, len(snapshotIDs) == 1,
+		"expected one snapshot, got %v", snapshotIDs)
+
+	// prune should fail
+	err := runPrune(env.gopts)
+	if err == nil {
+		t.Fatalf("expected prune to fail")
+	}
+	if !strings.Contains(err.Error(), "blobs seem to be missing") {
+		t.Fatalf("did not find hint for missing blobs")
+	}
+	t.Log(err)
+}
+
 func TestHardLink(t *testing.T) {
 	// this test assumes a test set with a single directory containing hard linked files
 	env, cleanup := withTestEnvironment(t)
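listPacks above collects pack IDs into a restic.IDSet, and removeDataPacksExcept keeps only the packs in a given set. A set like this is essentially a map with empty-struct values; a generic sketch under that assumption (idSet and its methods are illustrative, not restic's implementation):

```go
package main

import "fmt"

// idSet mimics the map-based set used for pack bookkeeping: a map whose
// values carry no data, so only key membership matters.
type idSet map[string]struct{}

func newIDSet() idSet { return make(idSet) }

func (s idSet) Insert(id string) { s[id] = struct{}{} }

func (s idSet) Has(id string) bool {
	_, ok := s[id]
	return ok
}

// except returns the elements of s that are not in keep, which is the
// shape of a "remove all data packs except these" helper.
func (s idSet) except(keep idSet) []string {
	var out []string
	for id := range s {
		if !keep.Has(id) {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	packs := newIDSet()
	packs.Insert("p1")
	packs.Insert("p2")

	keep := newIDSet()
	keep.Insert("p1")

	// only the packs outside "keep" would be deleted
	fmt.Println(len(packs.except(keep)))
}
```

The empty struct costs zero bytes per value, which is why this shape is the idiomatic Go set.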
@@ -1120,9 +1436,9 @@ func TestHardLink(t *testing.T) {
 	for i, snapshotID := range snapshotIDs {
 		restoredir := filepath.Join(env.base, fmt.Sprintf("restore%d", i))
 		t.Logf("restoring snapshot %v to %v", snapshotID.Str(), restoredir)
-		testRunRestore(t, env.gopts, restoredir, snapshotIDs[0])
-		rtest.Assert(t, directoriesEqualContents(env.testdata, filepath.Join(restoredir, "testdata")),
-			"directories are not equal")
+		testRunRestore(t, env.gopts, restoredir, snapshotID)
+		diff := directoriesContentsDiff(env.testdata, filepath.Join(restoredir, "testdata"))
+		rtest.Assert(t, diff == "", "directories are not equal %v", diff)

 		linkResults := createFileSetPerHardlink(filepath.Join(restoredir, "testdata"))
 		rtest.Assert(t, linksEqual(linkTests, linkResults),
@@ -1147,11 +1463,7 @@ func linksEqual(source, dest map[uint64][]string) bool {
 		}
 	}

-	if len(dest) != 0 {
-		return false
-	}
-
-	return true
+	return len(dest) == 0
 }

 func linkEqual(source, dest []string) bool {
@@ -1188,18 +1500,7 @@ func TestQuietBackup(t *testing.T) {
 	env, cleanup := withTestEnvironment(t)
 	defer cleanup()

-	datafile := filepath.Join("testdata", "backup-data.tar.gz")
-	fd, err := os.Open(datafile)
-	if os.IsNotExist(errors.Cause(err)) {
-		t.Skipf("unable to find data file %q, skipping", datafile)
-		return
-	}
-	rtest.OK(t, err)
-	rtest.OK(t, fd.Close())
-
-	testRunInit(t, env.gopts)
-
-	rtest.SetupTarTestFixture(t, env.testdata, datafile)
+	testSetupBackupData(t, env)
 	opts := BackupOptions{}

 	env.gopts.Quiet = false
@@ -1218,3 +1519,91 @@ func TestQuietBackup(t *testing.T) {

 	testRunCheck(t, env.gopts)
 }
+
+func copyFile(dst string, src string) error {
+	srcFile, err := os.Open(src)
+	if err != nil {
+		return err
+	}
+	defer srcFile.Close()
+
+	dstFile, err := os.Create(dst)
+	if err != nil {
+		return err
+	}
+	defer dstFile.Close()
+
+	_, err = io.Copy(dstFile, srcFile)
+	return err
+}
+
+var diffOutputRegexPatterns = []string{
+	"-.+modfile",
+	"M.+modfile1",
+	"\\+.+modfile2",
+	"\\+.+modfile3",
+	"\\+.+modfile4",
+	"-.+submoddir",
+	"-.+submoddir.subsubmoddir",
+	"\\+.+submoddir2",
+	"\\+.+submoddir2.subsubmoddir",
+	"Files: +2 new, +1 removed, +1 changed",
+	"Dirs: +3 new, +2 removed",
+	"Data Blobs: +2 new, +1 removed",
+	"Added: +7[0-9]{2}\\.[0-9]{3} KiB",
+	"Removed: +2[0-9]{2}\\.[0-9]{3} KiB",
+}
+
+func TestDiff(t *testing.T) {
+	env, cleanup := withTestEnvironment(t)
+	defer cleanup()
+
+	testRunInit(t, env.gopts)
+
+	datadir := filepath.Join(env.base, "testdata")
+	testdir := filepath.Join(datadir, "testdir")
+	subtestdir := filepath.Join(testdir, "subtestdir")
+	testfile := filepath.Join(testdir, "testfile")
+
+	rtest.OK(t, os.Mkdir(testdir, 0755))
+	rtest.OK(t, os.Mkdir(subtestdir, 0755))
+	rtest.OK(t, appendRandomData(testfile, 256*1024))
+
+	moddir := filepath.Join(datadir, "moddir")
+	submoddir := filepath.Join(moddir, "submoddir")
+	subsubmoddir := filepath.Join(submoddir, "subsubmoddir")
+	modfile := filepath.Join(moddir, "modfile")
+	rtest.OK(t, os.Mkdir(moddir, 0755))
+	rtest.OK(t, os.Mkdir(submoddir, 0755))
+	rtest.OK(t, os.Mkdir(subsubmoddir, 0755))
+	rtest.OK(t, copyFile(modfile, testfile))
+	rtest.OK(t, appendRandomData(modfile+"1", 256*1024))
+
+	snapshots := make(map[string]struct{})
+	opts := BackupOptions{}
+	testRunBackup(t, "", []string{datadir}, opts, env.gopts)
+	snapshots, firstSnapshotID := lastSnapshot(snapshots, loadSnapshotMap(t, env.gopts))
+
+	rtest.OK(t, os.Rename(modfile, modfile+"3"))
+	rtest.OK(t, os.Rename(submoddir, submoddir+"2"))
+	rtest.OK(t, appendRandomData(modfile+"1", 256*1024))
+	rtest.OK(t, appendRandomData(modfile+"2", 256*1024))
+	rtest.OK(t, os.Mkdir(modfile+"4", 0755))
+
+	testRunBackup(t, "", []string{datadir}, opts, env.gopts)
+	snapshots, secondSnapshotID := lastSnapshot(snapshots, loadSnapshotMap(t, env.gopts))
+
+	_, err := testRunDiffOutput(env.gopts, "", secondSnapshotID)
+	rtest.Assert(t, err != nil, "expected error on invalid snapshot id")
+
+	out, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
+	if err != nil {
+		t.Fatalf("expected no error from diff for test repository, got %v", err)
+	}
+
+	for _, pattern := range diffOutputRegexPatterns {
+		r, err := regexp.Compile(pattern)
+		rtest.Assert(t, err == nil, "failed to compile regexp %v", pattern)
+		rtest.Assert(t, r.MatchString(out), "expected pattern %v in output, got\n%v", pattern, out)
+	}
+}
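TestDiff validates the diff output by compiling each entry of diffOutputRegexPatterns and matching it against the captured text. A minimal sketch of that compile-and-match loop, run against a made-up sample output (the sample lines and patterns below are illustrative, styled after the test):

```go
package main

import (
	"fmt"
	"regexp"
)

// matchAll returns the patterns that fail to match the given output;
// an empty result means the output passed every check.
func matchAll(patterns []string, out string) []string {
	var failed []string
	for _, p := range patterns {
		r := regexp.MustCompile(p)
		if !r.MatchString(out) {
			failed = append(failed, p)
		}
	}
	return failed
}

func main() {
	// Made-up diff output in the style the patterns above expect.
	out := "-    /moddir/modfile\nM    /moddir/modfile1\nFiles:  2 new, 1 removed, 1 changed"
	patterns := []string{
		"-.+modfile",
		"M.+modfile1",
		"Files: +2 new, +1 removed, +1 changed",
	}
	fmt.Println(len(matchAll(patterns, out))) // 0: every pattern matched
}
```

Matching each pattern independently, instead of one giant regexp, keeps failure messages specific to the line that broke.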
@@ -2,8 +2,6 @@ package main

 import (
 	"context"
-	"fmt"
-	"os"
 	"sync"
 	"time"

@@ -36,7 +34,7 @@ func lockRepository(repo *repository.Repository, exclusive bool) (*restic.Lock,

 	lock, err := lockFn(context.TODO(), repo)
 	if err != nil {
-		return nil, errors.Fatalf("unable to create lock in backend: %v", err)
+		return nil, errors.WithMessage(err, "unable to create lock in backend")
 	}
 	debug.Log("create lock %p (exclusive %v)", lock, exclusive)

@@ -79,7 +77,7 @@ func refreshLocks(wg *sync.WaitGroup, done <-chan struct{}) {
 	for _, lock := range globalLocks.locks {
 		err := lock.Refresh(context.TODO())
 		if err != nil {
-			fmt.Fprintf(os.Stderr, "unable to refresh lock: %v\n", err)
+			Warnf("unable to refresh lock: %v\n", err)
 		}
 	}
 	globalLocks.Unlock()
@@ -54,7 +54,7 @@ directories in an encrypted repository stored on different backends.
 		if c.Name() == "version" {
 			return nil
 		}
-		pwd, err := resolvePassword(globalOptions)
+		pwd, err := resolvePassword(globalOptions, "RESTIC_PASSWORD")
 		if err != nil {
 			fmt.Fprintf(os.Stderr, "Resolving password failed: %v\n", err)
 			Exit(1)
@@ -88,6 +88,8 @@ func main() {
 	switch {
 	case restic.IsAlreadyLocked(errors.Cause(err)):
 		fmt.Fprintf(os.Stderr, "%v\nthe `unlock` command can be used to remove stale locks\n", err)
+	case err == ErrInvalidSourceData:
+		fmt.Fprintf(os.Stderr, "Warning: %v\n", err)
 	case errors.IsFatal(errors.Cause(err)):
 		fmt.Fprintf(os.Stderr, "%v\n", err)
 	case err != nil:
@@ -103,9 +105,13 @@ func main() {
 	}

 	var exitCode int
-	if err != nil {
+	switch err {
+	case nil:
+		exitCode = 0
+	case ErrInvalidSourceData:
+		exitCode = 3
+	default:
 		exitCode = 1
 	}

 	Exit(exitCode)
 }
cmd/restic/progress.go (new file, 36 lines)
@@ -0,0 +1,36 @@
+package main
+
+import (
+	"fmt"
+	"time"
+
+	"github.com/restic/restic/internal/restic"
+)
+
+// newProgressMax returns a progress that counts blobs.
+func newProgressMax(show bool, max uint64, description string) *restic.Progress {
+	if !show {
+		return nil
+	}
+
+	p := restic.NewProgress()
+
+	p.OnUpdate = func(s restic.Stat, d time.Duration, ticker bool) {
+		status := fmt.Sprintf("[%s] %s %d / %d %s",
+			formatDuration(d),
+			formatPercent(s.Blobs, max),
+			s.Blobs, max, description)
+
+		if w := stdoutTerminalWidth(); w > 0 {
+			status = shortenStatus(w, status)
+		}
+
+		PrintProgress("%s", status)
+	}
+
+	p.OnDone = func(s restic.Stat, d time.Duration, ticker bool) {
+		fmt.Printf("\n")
+	}
+
+	return p
+}
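newProgressMax customizes a progress object by assigning closures to its OnUpdate and OnDone callback fields rather than by subclassing. A stripped-down sketch of that callback-field pattern (the progress type and report method here are illustrative, not restic's):

```go
package main

import "fmt"

// progress holds optional callbacks; the owner fires them and callers
// customize behavior by assigning closures, as newProgressMax does.
type progress struct {
	OnUpdate func(done, max uint64)
	OnDone   func()
}

// report fires OnUpdate on every call and OnDone once the work is
// complete, guarding against unset callbacks.
func (p *progress) report(done, max uint64) {
	if p.OnUpdate != nil {
		p.OnUpdate(done, max)
	}
	if done >= max && p.OnDone != nil {
		p.OnDone()
	}
}

func main() {
	p := &progress{}
	p.OnUpdate = func(done, max uint64) {
		fmt.Printf("[%d / %d]\n", done, max)
	}
	p.OnDone = func() { fmt.Println("done") }

	p.report(1, 2)
	p.report(2, 2)
}
```

The nil checks matter: a caller that wants no output simply leaves the fields unset, which is how `show == false` degrades to a nil progress above.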
cmd/restic/secondary_repo.go (new file, 48 lines)
@@ -0,0 +1,48 @@
+package main
+
+import (
+	"os"
+
+	"github.com/restic/restic/internal/errors"
+	"github.com/spf13/pflag"
+)
+
+type secondaryRepoOptions struct {
+	Repo            string
+	password        string
+	PasswordFile    string
+	PasswordCommand string
+	KeyHint         string
+}
+
+func initSecondaryRepoOptions(f *pflag.FlagSet, opts *secondaryRepoOptions, repoPrefix string, repoUsage string) {
+	f.StringVarP(&opts.Repo, "repo2", "", os.Getenv("RESTIC_REPOSITORY2"), repoPrefix+" repository "+repoUsage+" (default: $RESTIC_REPOSITORY2)")
+	f.StringVarP(&opts.PasswordFile, "password-file2", "", os.Getenv("RESTIC_PASSWORD_FILE2"), "`file` to read the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_FILE2)")
+	f.StringVarP(&opts.KeyHint, "key-hint2", "", os.Getenv("RESTIC_KEY_HINT2"), "key ID of key to try decrypting the "+repoPrefix+" repository first (default: $RESTIC_KEY_HINT2)")
+	f.StringVarP(&opts.PasswordCommand, "password-command2", "", os.Getenv("RESTIC_PASSWORD_COMMAND2"), "shell `command` to obtain the "+repoPrefix+" repository password from (default: $RESTIC_PASSWORD_COMMAND2)")
+}
+
+func fillSecondaryGlobalOpts(opts secondaryRepoOptions, gopts GlobalOptions, repoPrefix string) (GlobalOptions, error) {
+	if opts.Repo == "" {
+		return GlobalOptions{}, errors.Fatal("Please specify a " + repoPrefix + " repository location (--repo2)")
+	}
+	var err error
+	dstGopts := gopts
+	dstGopts.Repo = opts.Repo
+	dstGopts.PasswordFile = opts.PasswordFile
+	dstGopts.PasswordCommand = opts.PasswordCommand
+	dstGopts.KeyHint = opts.KeyHint
+	if opts.password != "" {
+		dstGopts.password = opts.password
+	} else {
+		dstGopts.password, err = resolvePassword(dstGopts, "RESTIC_PASSWORD2")
+		if err != nil {
+			return GlobalOptions{}, err
+		}
+	}
+	dstGopts.password, err = ReadPassword(dstGopts, "enter password for "+repoPrefix+" repository: ")
+	if err != nil {
+		return GlobalOptions{}, err
+	}
+	return dstGopts, nil
+}
@@ -245,7 +245,7 @@ From Source
 ***********
 
 restic is written in the Go programming language and you need at least
-Go version 1.11. Building restic may also work with older versions of Go,
+Go version 1.13. Building restic may also work with older versions of Go,
 but that's not supported. See the `Getting
 started <https://golang.org/doc/install>`__ guide of the Go project for
 instructions how to install Go.
@@ -292,7 +292,7 @@ Restic can write out man pages and bash/zsh compatible autocompletion scripts:
     and the auto-completion files for bash and zsh).
 
     Usage:
-      restic generate [command] [flags]
+      restic generate [flags] [command]
 
     Flags:
           --bash-completion file   write bash completion file
@@ -43,9 +43,9 @@ command and enter the same password twice:
 .. code-block:: console
 
     $ restic init --repo /srv/restic-repo
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
-    created restic backend 085b3c76b9 at /srv/restic-repo
+    created restic repository 085b3c76b9 at /srv/restic-repo
     Please note that knowledge of your password is required to access the repository.
     Losing your password means that your data is irrecoverably lost.
 
@@ -75,9 +75,9 @@ simply be achieved by changing the URL scheme in the ``init`` command:
 .. code-block:: console
 
     $ restic -r sftp:user@host:/srv/restic-repo init
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
-    created restic backend f1c6108821 at sftp:user@host:/srv/restic-repo
+    created restic repository f1c6108821 at sftp:user@host:/srv/restic-repo
     Please note that knowledge of your password is required to access the repository.
     Losing your password means that your data is irrecoverably lost.
 
@@ -212,9 +212,9 @@ default location:
 .. code-block:: console
 
     $ restic -r s3:s3.amazonaws.com/bucket_name init
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
-    created restic backend eefee03bbd at s3:s3.amazonaws.com/bucket_name
+    created restic repository eefee03bbd at s3:s3.amazonaws.com/bucket_name
     Please note that knowledge of your password is required to access the repository.
     Losing your password means that your data is irrecoverably lost.
 
@@ -262,9 +262,9 @@ this command.
 .. code-block:: console
 
     $ ./restic -r s3:http://localhost:9000/restic init
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
-    created restic backend 6ad29560f5 at s3:http://localhost:9000/restic1
+    created restic repository 6ad29560f5 at s3:http://localhost:9000/restic1
     Please note that knowledge of your password is required to access
     the repository. Losing your password means that your data is irrecoverably lost.
 
@@ -291,9 +291,9 @@ this command.
 .. code-block:: console
 
     $ ./restic -r s3:https://<WASABI-SERVICE-URL>/<WASABI-BUCKET-NAME> init
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
-    created restic backend xxxxxxxxxx at s3:https://<WASABI-SERVICE-URL>/<WASABI-BUCKET-NAME>
+    created restic repository xxxxxxxxxx at s3:https://<WASABI-SERVICE-URL>/<WASABI-BUCKET-NAME>
     Please note that knowledge of your password is required to access
     the repository. Losing your password means that your data is irrecoverably lost.
 
@@ -357,9 +357,9 @@ the container does not exist, it will be created automatically:
 .. code-block:: console
 
     $ restic -r swift:container_name:/path init   # path is optional
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
-    created restic backend eefee03bbd at swift:container_name:/path
+    created restic repository eefee03bbd at swift:container_name:/path
     Please note that knowledge of your password is required to access the repository.
     Losing your password means that your data is irrecoverably lost.
 
@@ -391,9 +391,9 @@ privilege to create buckets, it will be created automatically:
 .. code-block:: console
 
     $ restic -r b2:bucketname:path/to/repo init
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
-    created restic backend eefee03bbd at b2:bucketname:path/to/repo
+    created restic repository eefee03bbd at b2:bucketname:path/to/repo
     Please note that knowledge of your password is required to access the repository.
     Losing your password means that your data is irrecoverably lost.
 
@@ -420,10 +420,10 @@ root path like this:
 .. code-block:: console
 
     $ restic -r azure:foo:/ init
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
 
-    created restic backend a934bac191 at azure:foo:/
+    created restic repository a934bac191 at azure:foo:/
     [...]
 
 The number of concurrent connections to the Azure Blob Storage service can be set with the
@@ -464,10 +464,10 @@ repository in the bucket ``foo`` at the root path:
 .. code-block:: console
 
     $ restic -r gs:foo:/ init
-    enter password for new backend:
+    enter password for new repository:
     enter password again:
 
-    created restic backend bde47d6254 at gs:foo2/
+    created restic repository bde47d6254 at gs:foo2/
     [...]
 
 The number of concurrent connections to the GCS service can be set with the
@@ -83,7 +83,7 @@ You can even backup individual files in the same repository (not passing
     snapshot 249d0210 saved
 
 If you're interested in what restic does, pass ``--verbose`` twice (or
-``--verbose 2``) to display detailed information about each file and directory
+``--verbose=2``) to display detailed information about each file and directory
 restic encounters:
 
 .. code-block:: console
@@ -142,7 +142,9 @@ the exclude options are:
 - ``--iexclude`` Same as ``--exclude`` but ignores the case of paths
 - ``--exclude-caches`` Specified once to exclude folders containing a special file
 - ``--exclude-file`` Specified one or more times to exclude items listed in a given file
+- ``--iexclude-file`` Same as ``--exclude-file`` but ignores the case of paths like ``--iexclude``
 - ``--exclude-if-present foo`` Specified one or more times to exclude a folder's content if it contains a file called ``foo`` (optionally having a given header, no wildcards for the file name supported)
+- ``--exclude-larger-than size`` Specified once to exclude files larger than the given size
 
 Please see ``restic help backup`` for more specific information about each exclude option.
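The difference between ``--exclude`` and its case-insensitive ``--iexclude`` variant can be sketched with a simplified matcher. This is an illustrative example, not restic's actual matching code: restic's real matcher handles full paths and ``**`` patterns, which Go's `filepath.Match` does not.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// matches reports whether pattern matches name, optionally ignoring
// case - illustrating --exclude (case sensitive) versus --iexclude
// (case insensitive). Simplified sketch only.
func matches(pattern, name string, ignoreCase bool) bool {
	if ignoreCase {
		pattern = strings.ToLower(pattern)
		name = strings.ToLower(name)
	}
	ok, _ := filepath.Match(pattern, name)
	return ok
}

func main() {
	fmt.Println(matches("*.JPG", "photo.jpg", false)) // false: --exclude is case sensitive
	fmt.Println(matches("*.JPG", "photo.jpg", true))  // true: --iexclude ignores case
}
```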
@@ -213,16 +215,47 @@ On most Unixy shells, you can either quote or use backslashes. For example:
 
 By specifying the option ``--one-file-system`` you can instruct restic
 to only backup files from the file systems the initially specified files
-or directories reside on. For example, calling restic like this won't
-backup ``/sys`` or ``/dev`` on a Linux system:
+or directories reside on. In other words, it will prevent restic from crossing
+filesystem boundaries when performing a backup.
+
+For example, if you backup ``/`` with this option and you have external
+media mounted under ``/media/usb`` then restic will not back up ``/media/usb``
+at all because this is a different filesystem than ``/``. Virtual filesystems
+such as ``/proc`` are also considered different and thereby excluded when
+using ``--one-file-system``:
 
 .. code-block:: console
 
     $ restic -r /srv/restic-repo backup --one-file-system /
 
+Please note that this does not prevent you from specifying multiple filesystems
+on the command line, e.g.:
+
+.. code-block:: console
+
+    $ restic -r /srv/restic-repo backup --one-file-system / /media/usb
+
+will back up both the ``/`` and ``/media/usb`` filesystems, but will not
+include other filesystems like ``/sys`` and ``/proc``.
+
 .. note:: ``--one-file-system`` is currently unsupported on Windows, and will
     cause the backup to immediately fail with an error.
 
+Files larger than a given size can be excluded using the ``--exclude-larger-than``
+option:
+
+.. code-block:: console
+
+    $ restic -r /srv/restic-repo backup ~/work --exclude-larger-than 1M
+
+This excludes files in ``~/work`` which are larger than 1 MB from the backup.
+
+The default unit for the size value is bytes, so e.g. ``--exclude-larger-than 2048``
+would exclude files larger than 2048 bytes (2 kilobytes). To specify other units,
+suffix the size value with one of ``k``/``K`` for kilobytes, ``m``/``M`` for megabytes,
+``g``/``G`` for gigabytes and ``t``/``T`` for terabytes (e.g. ``1k``, ``10K``, ``20m``,
+``20M``, ``30g``, ``30G``, ``2t`` or ``2T``).
+
 Including Files
 ***************
 
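The size-suffix rules described for ``--exclude-larger-than`` can be sketched as a small parser. Note this is an illustrative sketch of the documented behavior (1024-based units, case-insensitive suffixes), not restic's actual parsing code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts a size argument like the one accepted by
// --exclude-larger-than into bytes. Suffixes k/K, m/M, g/G and t/T
// are 1024-based, matching the "2048 bytes (2 kilobytes)" example
// above; a bare number means bytes.
func parseSize(s string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(strings.ToLower(s), "k"):
		mult, s = 1024, s[:len(s)-1]
	case strings.HasSuffix(strings.ToLower(s), "m"):
		mult, s = 1024*1024, s[:len(s)-1]
	case strings.HasSuffix(strings.ToLower(s), "g"):
		mult, s = 1024*1024*1024, s[:len(s)-1]
	case strings.HasSuffix(strings.ToLower(s), "t"):
		mult, s = 1024*1024*1024*1024, s[:len(s)-1]
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return n * mult, nil
}

func main() {
	for _, arg := range []string{"2048", "1k", "20M", "2T"} {
		n, _ := parseSize(arg)
		fmt.Printf("%s = %d bytes\n", arg, n)
	}
}
```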
@@ -366,12 +399,11 @@ created as it would only be written at the very (successful) end of
 the backup operation. Previous snapshots will still be there and will still
 work.
 
-
 Environment Variables
 *********************
 
 In addition to command-line options, restic supports passing various options in
-environment variables. The following list of environment variables:
+environment variables. The following lists these environment variables:
 
 .. code-block:: console
 
@@ -379,9 +411,13 @@ environment variables. The following list of environment variables:
     RESTIC_PASSWORD_FILE        Location of password file (replaces --password-file)
     RESTIC_PASSWORD             The actual password for the repository
     RESTIC_PASSWORD_COMMAND     Command printing the password for the repository to stdout
+    RESTIC_KEY_HINT             ID of key to try decrypting first, before other keys
+    RESTIC_CACHE_DIR            Location of the cache directory
+    RESTIC_PROGRESS_FPS         Frames per second by which the progress bar is updated
 
     AWS_ACCESS_KEY_ID           Amazon S3 access key ID
     AWS_SECRET_ACCESS_KEY       Amazon S3 secret access key
+    AWS_DEFAULT_REGION          Amazon S3 default region
 
     ST_AUTH                     Auth URL for keystone v1 authentication
     ST_USER                     Username for keystone v1 authentication
@@ -416,5 +452,33 @@ environment variables. The following list of environment variables:
 
     RCLONE_BWLIMIT              rclone bandwidth limit
 
+In addition to restic-specific environment variables, the following system-wide environment variables
+are taken into account for various operations:
+
+* ``$XDG_CACHE_HOME/restic``, ``$HOME/.cache/restic``: :ref:`caching`.
+* ``$TMPDIR``: :ref:`temporary_files`.
+* ``$PATH/fusermount``: Binary for ``restic mount``.
+
+
+Exit status codes
+*****************
+
+Restic returns one of the following exit status codes after the backup command is run:
+
+* 0 when the backup was successful (snapshot with all source files created)
+* 1 when there was a fatal error (no snapshot created)
+* 3 when some source files could not be read (incomplete snapshot with remaining files created)
+
+Fatal errors occur for example when restic is unable to write to the backup destination, when
+there are network connectivity issues preventing successful communication, or when an invalid
+password or command line argument is provided. When restic returns this exit status code, one
+should not expect a snapshot to have been created.
+
+Source file read errors occur when restic fails to read one or more files or directories that
+it was asked to back up, e.g. due to permission problems. Restic displays the number of source
+file read errors that occurred while running the backup. If there are errors of this type,
+restic will still try to complete the backup run with all the other files, and create a
+snapshot that then contains all but the unreadable files.
+
+One can use these exit status codes in scripts and other automation tools, to make them aware of
+the outcome of the backup run. To manually inspect the exit code in e.g. Linux, run ``echo $?``.
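The three documented exit codes map naturally to a small dispatch in wrapper tooling. A sketch (the function name is illustrative; only codes 0, 1 and 3 are defined by the documentation above):

```go
package main

import "fmt"

// describeBackupExit maps the backup command's documented exit status
// codes to a short description, as a wrapper script or monitoring tool
// might do. Other codes are reported as unexpected.
func describeBackupExit(code int) string {
	switch code {
	case 0:
		return "success: snapshot with all source files created"
	case 1:
		return "fatal error: no snapshot created"
	case 3:
		return "partial: some source files unreadable, incomplete snapshot created"
	}
	return fmt.Sprintf("unexpected exit code %d", code)
}

func main() {
	fmt.Println(describeBackupExit(3))
}
```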
@@ -82,6 +82,89 @@ Furthermore you can group the output by the same filters (host, paths, tags):
     1 snapshots
 
 
+Copying snapshots between repositories
+======================================
+
+In case you want to transfer snapshots between two repositories, for
+example from a local to a remote repository, you can use the ``copy`` command:
+
+.. code-block:: console
+
+    $ restic -r /srv/restic-repo copy --repo2 /srv/restic-repo-copy
+    repository d6504c63 opened successfully, password is correct
+    repository 3dd0878c opened successfully, password is correct
+
+    snapshot 410b18a2 of [/home/user/work] at 2020-06-09 23:15:57.305305 +0200 CEST)
+    copy started, this may take a while...
+    snapshot 7a746a07 saved
+
+    snapshot 4e5d5487 of [/home/user/work] at 2020-05-01 22:44:07.012113 +0200 CEST)
+    skipping snapshot 4e5d5487, was already copied to snapshot 50eb62b7
+
+The example command copies all snapshots from the source repository
+``/srv/restic-repo`` to the destination repository ``/srv/restic-repo-copy``.
+Snapshots which have previously been copied between repositories will
+be skipped by later copy runs.
+
+.. note:: Note that this process will have to read (download) and write (upload) the
+    entire snapshot(s) due to the different encryption keys used in the source and
+    destination repository. Also, the transferred files are not re-chunked, which
+    may break deduplication between files already stored in the destination repo
+    and files copied there using this command. See the next section for how to avoid
+    this problem.
+
+For the destination repository ``--repo2`` the password can be read from
+a file ``--password-file2`` or from a command ``--password-command2``.
+Alternatively the environment variables ``$RESTIC_PASSWORD_COMMAND2`` and
+``$RESTIC_PASSWORD_FILE2`` can be used. It is also possible to directly
+pass the password via ``$RESTIC_PASSWORD2``. The key which should be used
+for decryption can be selected by passing its ID via the flag ``--key-hint2``
+or the environment variable ``$RESTIC_KEY_HINT2``.
+
+In case the source and destination repository use the same backend, then
+configuration options and environment variables to configure the backend
+apply to both repositories. For example it might not be possible to specify
+different accounts for the source and destination repository. You can
+avoid this limitation by using the rclone backend along with remotes which
+are configured in rclone.
+
+The list of snapshots to copy can be filtered by host, path in the backup
+and / or a comma-separated tag list:
+
+.. code-block:: console
+
+    $ restic -r /srv/restic-repo copy --repo2 /srv/restic-repo-copy --host luigi --path /srv --tag foo,bar
+
+It is also possible to explicitly specify the list of snapshots to copy, in
+which case only these instead of all snapshots will be copied:
+
+.. code-block:: console
+
+    $ restic -r /srv/restic-repo copy --repo2 /srv/restic-repo-copy 410b18a2 4e5d5487 latest
+
+
+Ensuring deduplication for copied snapshots
+-------------------------------------------
+
+Even though the copy command can transfer snapshots between arbitrary repositories,
+deduplication between snapshots from the source and destination repository may not work.
+To ensure proper deduplication, both repositories have to use the same parameters for
+splitting large files into smaller chunks, which requires additional setup steps. With
+the same parameters restic will for both repositories split identical files into
+identical chunks and therefore deduplication also works for snapshots copied between
+these repositories.
+
+The chunker parameters are generated once when creating a new (destination) repository.
+That is, for a copy destination repository, we have to instruct restic to initialize it
+using the same chunker parameters as the source repository:
+
+.. code-block:: console
+
+    $ restic -r /srv/restic-repo-copy init --repo2 /srv/restic-repo --copy-chunker-params
+
+Note that it is not possible to change the chunker parameters of an existing repository.
+
+
 Checking integrity and consistency
 ==================================
 
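The "skipping snapshot …, was already copied" behavior in the example output can be pictured as filtering the source snapshot list against a set of snapshots already present in the destination. A toy sketch of that filtering step (illustrative only; restic tracks this through snapshot metadata rather than a plain map):

```go
package main

import "fmt"

// snapshotsToCopy filters the source snapshot IDs, skipping any that
// are recorded as already copied to the destination repository.
func snapshotsToCopy(source []string, alreadyCopied map[string]bool) []string {
	var todo []string
	for _, id := range source {
		if alreadyCopied[id] {
			continue // corresponds to "skipping snapshot …" in the output
		}
		todo = append(todo, id)
	}
	return todo
}

func main() {
	src := []string{"410b18a2", "4e5d5487"}
	done := map[string]bool{"4e5d5487": true} // already copied earlier
	fmt.Println(snapshotsToCopy(src, done))   // [410b18a2]
}
```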
@@ -134,10 +217,10 @@ If the repository structure is intact, restic will show that no errors were found:
     check snapshots, trees and blobs
     no errors were found
 
-By default, the ``check`` command does not verify that the actual data files
+By default, the ``check`` command does not verify that the actual pack files
 on disk in the repository are unmodified, because doing so requires reading
-a copy of every data file in the repository. To tell restic to also verify the
-integrity of the data files in the repository, use the ``--read-data`` flag:
+a copy of every pack file in the repository. To tell restic to also verify the
+integrity of the pack files in the repository, use the ``--read-data`` flag:
 
 .. code-block:: console
 
@@ -151,16 +234,16 @@ integrity of the data files in the repository, use the ``--read-data`` flag:
     duration: 0:00
     no errors were found
 
-.. note:: Since ``--read-data`` has to download all data files in the
+.. note:: Since ``--read-data`` has to download all pack files in the
     repository, beware that it might incur higher bandwidth costs than usual
     and also that it takes more time than the default ``check``.
 
 Alternatively, use the ``--read-data-subset=n/t`` parameter to check only a
-subset of the repository data files at a time. The parameter takes two values,
-``n`` and ``t``. When the check command runs, all data files in the repository
+subset of the repository pack files at a time. The parameter takes two values,
+``n`` and ``t``. When the check command runs, all pack files in the repository
 are logically divided in ``t`` (roughly equal) groups, and only files that
 belong to group number ``n`` are checked. For example, the following commands
-check all repository data files over 5 separate invocations:
+check all repository pack files over 5 separate invocations:
 
 .. code-block:: console
 
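The "roughly equal groups" partitioning behind ``--read-data-subset=n/t`` can be sketched with a simple round-robin assignment. This is illustrative only; restic's actual partitioning of pack files may differ:

```go
package main

import "fmt"

// subsetGroup assigns pack file number i (0-based) to one of t groups,
// numbered 1..t, so that --read-data-subset=n/t would check only the
// files in group n. Round-robin keeps the groups roughly equal in size.
func subsetGroup(i, t int) int {
	return i%t + 1
}

func main() {
	t := 5
	counts := make([]int, t+1)
	for i := 0; i < 23; i++ { // e.g. 23 pack files split into 5 groups
		counts[subsetGroup(i, t)]++
	}
	fmt.Println(counts[1:]) // roughly equal: [5 5 5 4 4]
}
```

Running ``check --read-data-subset=1/5`` through ``5/5`` over five invocations therefore covers every pack file exactly once.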
@@ -52,7 +52,7 @@ You can use the command ``restic ls latest`` or ``restic find foo`` to find the
 path to the file within the snapshot. This path you can then pass to
 ``--include`` in verbatim to only restore the single file or directory.
 
-There are case insensitive variants of of ``--exclude`` and ``--include`` called
+There are case insensitive variants of ``--exclude`` and ``--include`` called
 ``--iexclude`` and ``--iinclude``. These options will behave the same way but
 ignore the casing of paths.
 
@@ -176,7 +176,7 @@ Multiple policies will be ORed together so as to be as inclusive as possible
 for keeping snapshots.
 
 Additionally, you can restrict removing snapshots to those which have a
-particular hostname with the ``--hostname`` parameter, or tags with the
+particular hostname with the ``--host`` parameter, or tags with the
 ``--tag`` option. When multiple tags are specified, only the snapshots
 which have all the tags are considered. For example, the following command
 removes all but the latest snapshot of all snapshots that have the tag ``foo``:
@@ -312,12 +312,12 @@ the backups:
     root@a3e580b6369d:/# useradd -m restic
 
 Then we download and install the restic binary into the user's home
-directory.
+directory (please adjust the URL to refer to the latest restic version).
 
 .. code-block:: console
 
     root@a3e580b6369d:/# mkdir ~restic/bin
-    root@a3e580b6369d:/# curl -L https://github.com/restic/restic/releases/download/v0.9.1/restic_0.9.1_linux_amd64.bz2 | bunzip2 > ~restic/bin/restic
+    root@a3e580b6369d:/# curl -L https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2 | bunzip2 > ~restic/bin/restic
 
 Before we assign any special capability to the restic binary we
 restrict its permissions so that only root and the newly created
File diff suppressed because it is too large.
Some files were not shown because too many files have changed in this diff.