Compare commits


386 Commits

Author SHA1 Message Date
Michael Eischer
e1bb2129ad report result of delete operation after failed upload 2021-03-08 21:54:46 +01:00
Michael Eischer
95b7f8dd81 s3: Fix sanity check
The sanity check shouldn't replace the error message if there is already
one.
2021-03-08 21:54:46 +01:00
Michael Eischer
29e39e247a stat file after retrying an upload 2021-03-08 21:54:46 +01:00
Alexander Neumann
27f241334e Add version for 0.12.0 2021-02-14 11:44:20 +01:00
Alexander Neumann
4e99a3d650 Update manpages and auto-completion 2021-02-14 11:44:20 +01:00
Alexander Neumann
1cb1cd6f44 Generate CHANGELOG.md for 0.12.0 2021-02-14 11:44:00 +01:00
Alexander Neumann
1a34260cf0 Prepare changelog for 0.12.0 2021-02-14 11:44:00 +01:00
Alexander Neumann
13d52c88fb Merge pull request #3282 from restic/changelog-unreleased
changelog: Correct and slightly polish some unreleased changelog files
2021-02-14 10:49:33 +01:00
Leo R. Lundgren
4b5ca1e914 changelog: Correct and slightly polish some unreleased changelog files 2021-02-14 00:57:54 +01:00
MichaelEischer
917f5b910a Merge pull request #3270 from restic/fix-ls-json
ls: Check for non-nil error before calling panic()
2021-02-09 22:05:04 +01:00
Alexander Neumann
c0f2c1d871 ls: Check for non-nil error before calling panic() 2021-02-07 21:12:54 +01:00
Alexander Neumann
9985368d46 Merge pull request #3255 from MichaelEischer/restorer-check-error
restorer: Check dropped error
2021-02-03 16:10:46 +01:00
Alexander Neumann
2dd592a06c Merge pull request #3254 from lorenz/allow-http2
Allow HTTP/2
2021-02-02 21:08:20 +01:00
Lorenz Brun
362338dd60 Add changelog entry 2021-02-01 20:20:34 +01:00
MichaelEischer
6ac032be64 Merge pull request #3256 from aawsome/rebuild-index-bar-done
rebuild-index: Add missing bar.Done()
2021-01-31 18:48:14 +01:00
MichaelEischer
0ce05d5725 Merge pull request #3236 from Wouter0100/patch-1
doc: Note only path-style URL support for S3
2021-01-31 18:44:58 +01:00
Michael Eischer
0aed8d47d7 docs: Fix typo in S3 URL support 2021-01-31 18:33:15 +01:00
Alexander Weiss
39a26066f7 rebuild-index: add missing bar.Done() 2021-01-31 18:28:02 +01:00
Michael Eischer
47faf69230 restorer: Check dropped error 2021-01-31 18:06:28 +01:00
MichaelEischer
b3dc127af5 Merge pull request #3207 from aawsome/filerestorer-fix-errorhandling
Fix error handling in filerestorer and fix #3166
2021-01-31 17:47:46 +01:00
Wouter van Os
8442c43209 doc: Update wording for S3 path-only requirement 2021-01-31 15:10:03 +01:00
Alexander Weiss
6e942693ba Fix #3166 2021-01-31 14:22:57 +01:00
Alexander Weiss
5e22ae10f1 Add error handling for fileRestorer 2021-01-31 14:22:57 +01:00
Alexander Weiss
573221aa40 Add failing test for fileRestorer 2021-01-31 13:40:42 +01:00
MichaelEischer
b8550a21f2 Merge pull request #3253 from restic/improve-prune-error
prune: Improve error message for missing files
2021-01-31 12:03:20 +01:00
Alexander Neumann
027a51529d prune: Improve error message for missing files
This commit changes the error message so that a list of file names is
printed. Before, just the raw map was printed, which is not a great user
interface.
2021-01-31 11:31:27 +01:00
Lorenz Brun
5427119205 Allow HTTP/2 2021-01-31 02:44:30 +01:00
Alexander Neumann
f647614e24 Merge pull request #3248 from MichaelEischer/backend-cleanups
Backend code and test cleanups
2021-01-30 21:37:33 +01:00
Michael Eischer
e0867c9682 backend: try to cleanup test leftovers 2021-01-30 21:23:20 +01:00
Michael Eischer
f740b2fb23 mem: check upload length before storing upload 2021-01-30 21:23:20 +01:00
Alexander Neumann
0e5f2fff71 Merge pull request #3243 from restic/fix-scanner-overlap
backup: Fix total size for overlapping targets
2021-01-30 21:17:21 +01:00
MichaelEischer
99228be623 Merge pull request #3250 from restic/add-golangci-lint-config
Add golangci lint config, add many error checks
2021-01-30 21:09:38 +01:00
Alexander Neumann
04ca69cc78 Address issues reported by golint 2021-01-30 20:45:57 +01:00
Alexander Neumann
f867e65bcd Fix issues reported by staticcheck 2021-01-30 20:43:53 +01:00
Alexander Neumann
a00e27adf6 Add entry to changelog 2021-01-30 20:22:25 +01:00
Alexander Neumann
0858fbf6aa Add more error handling 2021-01-30 20:19:47 +01:00
Alexander Neumann
aef3658a5f Address review comments 2021-01-30 20:02:37 +01:00
Alexander Neumann
200f09522d Add more error checks 2021-01-30 20:02:37 +01:00
Alexander Neumann
cbd88c457a backup: Improve error handling 2021-01-30 20:02:37 +01:00
Alexander Neumann
1a0eb05bfa errcheck: Add more error checks 2021-01-30 20:02:37 +01:00
Alexander Neumann
3c753c071c errcheck: More error handling 2021-01-30 20:02:37 +01:00
Alexander Neumann
16313bfcc9 errcheck: Add error check for MergeFinalIndexes() 2021-01-30 20:02:37 +01:00
Alexander Neumann
75f53955ee errcheck: Add error checks
Most added checks are straight forward.
2021-01-30 20:02:37 +01:00
Alexander Neumann
1632a84e7b Add a section explaining golangci-lint 2021-01-30 20:02:37 +01:00
Alexander Neumann
b3d5bf7c99 Update golangci-lint version 2021-01-30 20:02:37 +01:00
Alexander Neumann
57627a307f Add config for golangci-lint 2021-01-30 20:02:37 +01:00
Alexander Neumann
6ab7d49a03 Merge pull request #3251 from MichaelEischer/rest-dropped-error
rest: handle dropped error in save operation
2021-01-30 19:44:50 +01:00
Michael Eischer
a53778cd83 rest: handle dropped error in save operation 2021-01-30 19:25:04 +01:00
Alexander Neumann
dd94efb307 Merge pull request #3249 from MichaelEischer/fix-dropped-gs-error
gs: Don't drop error when finishing upload
2021-01-30 16:24:57 +01:00
Michael Eischer
8a486eafed gs: Don't drop error when finishing upload
The error returned when finishing the upload of an object was dropped.
This could cause silent upload failures and thus data loss in certain
cases. When an MD5 hash for the uploaded blob is specified, a wrong
hash or a damaged upload only reports its error via Close(), and that
error was dropped.
2021-01-30 13:31:32 +01:00
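The pattern behind this fix, as a minimal Go sketch (types and names are illustrative, not restic's backend code): the Close() of an upload writer finalizes the object, so its error carries the server-side verification result and must not be discarded.

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // discard is a stand-in for a storage client's upload writer.
    type discard struct{ io.Writer }

    func (discard) Close() error { return nil }

    // save streams rd into w. The upload is only finalized by Close(),
    // e.g. a hash mismatch is reported there, so return its error.
    func save(w io.WriteCloser, rd io.Reader) error {
        if _, err := io.Copy(w, rd); err != nil {
            _ = w.Close() // best-effort cleanup; the copy error takes precedence
            return err
        }
        return w.Close() // do not drop this error
    }

    func main() {
        fmt.Println(save(discard{io.Discard}, strings.NewReader("data"))) // <nil>
    }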
MichaelEischer
4d576c2f79 Merge pull request #3217 from M4a1x/read-data-subset-fix
Fix missing rand seed for restic check --read-data-subset=x%
2021-01-29 23:34:13 +01:00
Max Stabel
f9e1fa26ff Fix missing rand seed for restic check --read-data-subset=x% 2021-01-29 23:18:35 +01:00
MichaelEischer
fb3cf3f885 Merge pull request #3245 from aawsome/prune-fix-statistics-cacheable
prune: Fix statistics for --repack-cacheable-only
2021-01-29 23:14:49 +01:00
Alexander Weiss
e08e65dc30 prune: Simplify logic selecting packs to repack 2021-01-29 22:27:22 +01:00
Alexander Weiss
daeb4cdf8f prune: Fix statistics for --repack-cacheable-only 2021-01-29 22:27:22 +01:00
Alexander Neumann
cdd704920d azure: Pass data length to Azure library
The azureAdapter was used directly without a pointer, but the Len()
method was only defined with a pointer receiver (which means Len() is
not present on an azureAdapter{}, only on a pointer to it).
2021-01-29 21:08:41 +01:00
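For context, the Go method-set rule behind this fix, in a minimal sketch (the adapter type here is illustrative):

    package main

    import "fmt"

    type adapter struct{ data []byte }

    // Len has a pointer receiver, so it is in the method set of
    // *adapter but not of adapter: a plain adapter{} value passed to
    // the library would not expose Len().
    func (a *adapter) Len() int { return len(a.data) }

    func main() {
        var v interface{ Len() int }
        v = &adapter{data: []byte("abc")} // ok: *adapter has Len()
        // v = adapter{}                  // compile error: Len() missing
        fmt.Println(v.Len()) // 3
    }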
Alexander Neumann
bbdf18c4a2 Merge pull request #3176 from MichaelEischer/backend-content-length
Pass upload size to backends and sanity check it
2021-01-29 20:33:44 +01:00
Michael Eischer
1f583b3d8e backend: test that incomplete uploads fail 2021-01-29 13:51:53 +01:00
Michael Eischer
c73316a111 backends: add sanity check for the uploaded file size
Bugs in the error handling while uploading a file to the backend could
cause incomplete files, e.g. https://github.com/golang/go/issues/42400,
which could affect the local backend.

Proactively add sanity checks which will treat an upload as failed if
the reported upload size does not match the actual file size.
2021-01-29 13:51:51 +01:00
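A sketch of such a check for a local-backend-style save (names are illustrative, not restic's code): compare the bytes actually written against the size the caller announced and fail the upload on a mismatch.

    package main

    import (
        "fmt"
        "io"
        "os"
        "strings"
    )

    // saveFile fails the upload when fewer (or more) bytes arrive than
    // the announced size, instead of silently keeping a truncated file.
    func saveFile(dst *os.File, rd io.Reader, expected int64) error {
        n, err := io.Copy(dst, rd)
        if err != nil {
            return err
        }
        if n != expected {
            return fmt.Errorf("wrote %d bytes instead of the expected %d bytes", n, expected)
        }
        return dst.Sync()
    }

    func main() {
        f, err := os.CreateTemp("", "upload-*")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        fmt.Println(saveFile(f, strings.NewReader("hello"), 5)) // <nil>
    }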
Michael Eischer
4526d5d197 swift: explicitly pass upload size to library
This allows properly setting the content-length which could help the
server-side to detect incomplete uploads.
2021-01-29 13:50:46 +01:00
Michael Eischer
dca9b6f5db azure: explicitly pass upload size
Previously the fallback from the azure library was to read the whole
blob into memory and use that to determine the upload size.
2021-01-29 13:50:46 +01:00
Alexander Neumann
a16ce65295 Merge pull request #3244 from MichaelEischer/better-damage-reports
Print more details about possible repository damages
2021-01-29 11:45:45 +01:00
Alexander Neumann
5c41120c70 Add entry to changelog 2021-01-29 11:31:36 +01:00
Alexander Neumann
5c617859ab backup/scanner: Fix total size for overlapping targets
Before, the scanner would count files twice if they were included in the
list of backup targets twice, e.g. `restic backup foo foo/bar` would
count the file `foo/bar` twice.

This commit uses the tree structure from the archiver to run the
scanner, so both parts see the same files.
2021-01-29 11:31:36 +01:00
Alexander Neumann
81211750ba archiver/tree: Introduce functions Leaf() and NodeNames() 2021-01-29 11:11:28 +01:00
rawtaz
de7e3a0648 Merge pull request #2823 from greatroar/trust-mtime
Add --ignore-ctime flag to backup and document change detection
2021-01-29 00:02:19 +01:00
greatroar
6bd8a2faaa backup: Add --ignore-ctime option and document change detection 2021-01-28 23:42:10 +01:00
Michael Eischer
58b5679f14 prune: reword missing blobs error
The previous wording could be understood such that the prune run did
damage the repository.
2021-01-28 21:48:24 +01:00
Michael Eischer
7b8886c052 prune: report missing but unneeded pack files
This indicates a damaged repository so add some output to help with
debugging.
2021-01-28 21:46:01 +01:00
Michael Eischer
ff95999246 rebuild-index: report added/removed/reindexed files
This should help with investigating missing pack files.
2021-01-28 21:46:01 +01:00
Michael Eischer
b71c52797a find: correctly expand multiple blob ids
For example `restic find --show-pack-id --blob f78dc991 5b9e4366 ddd8c7d4`
would previously only expand one blob if all of them belong to the same
file.
2021-01-28 21:21:54 +01:00
MichaelEischer
82140967d3 Merge pull request #3228 from aawsome/prune-all-trees
prune: Remove all unused trees
2021-01-28 21:04:27 +01:00
MichaelEischer
43cb26010a Merge pull request #3242 from greatroar/fprintln
internal/ui/termstatus: Use Fprintln to get a newline
2021-01-28 20:34:06 +01:00
MichaelEischer
35033d9b79 Merge pull request #3177 from MichaelEischer/fix-2759
prune: don't print stacktrace on console
2021-01-28 20:28:45 +01:00
Alexander Neumann
84822d44d4 Merge pull request #2536 from MatthewVance/threat-model
Update threat model
2021-01-28 14:26:03 +01:00
Matthew Vance
58c7f4694d Update threat model 2021-01-28 14:20:42 +01:00
Alexander Neumann
4d40c70214 Merge pull request #3211 from MichaelEischer/sftp-speedup
Speed-up caching/pack download via SFTP
2021-01-28 14:16:55 +01:00
Alexander Neumann
44169d0dc4 Merge pull request #3205 from MichaelEischer/fix-quiet-verbose
Properly check that --quiet and --verbose are not combined
2021-01-28 13:53:31 +01:00
Alexander Neumann
6aa7e9f9c6 Merge pull request #3174 from MichaelEischer/parallelize-lock-loading
Parallelize lock file loading
2021-01-28 13:52:12 +01:00
Alexander Neumann
bdfedf1f5b Merge pull request #3173 from MichaelEischer/unify-index-loading
Unify index loading
2021-01-28 13:50:42 +01:00
greatroar
b9cfe6f68a internal/ui/termstatus: Use Fprintln to get a newline 2021-01-28 13:30:10 +01:00
Alexander Neumann
72eec8c0c4 Merge pull request #3106 from MichaelEischer/parallel-tree-walk
Parallelize tree walk in prune and copy and add progress bar to check
2021-01-28 12:06:42 +01:00
Michael Eischer
68608a89ad restic: add comment about StreamTrees shutdown 2021-01-28 11:10:50 +01:00
Michael Eischer
1e306be000 Add changelog entry 2021-01-28 11:10:50 +01:00
Michael Eischer
ddb7697d29 restic: Test progress reporting of StreamTrees 2021-01-28 11:10:50 +01:00
Michael Eischer
313ad0e32f progress/counter: Fix test for final report call 2021-01-28 11:10:50 +01:00
Michael Eischer
e2b0072441 check: add progress bar to the tree structure check 2021-01-28 11:10:50 +01:00
Michael Eischer
505f8a2229 progress/counter: Support updating the progress bar maximum 2021-01-28 11:10:47 +01:00
Michael Eischer
eda8c67616 restic: let FindUsedBlobs handle multiple snapshots at once 2021-01-28 11:08:43 +01:00
Michael Eischer
258ce0c1e5 parallel: report progress for StreamTrees
This assigns an id to each tree root and then keeps track of how many
tree loads (i.e. trees referenced for the first time) are pending per
tree root. Once a tree root and its subtrees have been fully processed,
there are no more pending tree loads and the tree root is reported as
processed.
2021-01-28 11:08:43 +01:00
Michael Eischer
3d6a3e2555 copy: Remove treeCloner struct 2021-01-28 11:08:43 +01:00
Michael Eischer
0caad1e890 copy: parallelize tree walk 2021-01-28 11:08:43 +01:00
Michael Eischer
f2a1b125cb restic: Actually parallelize FindUsedBlobs 2021-01-28 11:08:43 +01:00
Michael Eischer
6e03f80ca2 check: Split the parallelized tree loader into a reusable component
The actual code change is minimal
2021-01-28 11:08:43 +01:00
Michael Eischer
1d7bb01a6b check: Cleanup tree loading and switch to use errgroup
The helper methods are now wired up in the Structure method.
2021-01-28 11:08:43 +01:00
Alexander Neumann
a4689eb3b9 Merge pull request #3199 from MichaelEischer/non-interactive-counter
Don't print progress on non-interactive terminals
2021-01-28 10:53:38 +01:00
Alexander Neumann
c5a66e9181 ui: Simplify channel receive 2021-01-28 10:42:02 +01:00
Wouter van Os
b5972f184c doc: Note only path-style URL support for S3
This adds a note to mention that currently only path-style URLs are supported for S3, as per code at:
- aa0faa8c7d/internal/backend/s3/config.go (L42-L45)
- aa0faa8c7d/internal/backend/s3/config.go (L48-L62)
2021-01-23 16:54:08 +01:00
Alexander Weiss
d7dc19a496 prune: Always repack packs containing tree blobs 2021-01-15 16:42:04 +01:00
Michael Eischer
f3442ce8a5 Test that WriteTo of a backend's Load remains accessible 2021-01-03 22:23:53 +01:00
Michael Eischer
678e75e1c2 sftp: enforce use of optimized upload method
ReadFrom was already used by Save before, this just ensures that this
won't accidentally change in the future.
2021-01-03 22:23:53 +01:00
Michael Eischer
6b5b29dbee limiter: add unit tests 2021-01-03 22:23:53 +01:00
Michael Eischer
f35f2c48cd limiter: support WriteTo in LimitBackend for read rate limiting 2021-01-03 22:23:53 +01:00
Michael Eischer
bcb852a8d0 hashing: support WriteTo in the reader 2021-01-03 22:23:53 +01:00
MichaelEischer
aa0faa8c7d Merge pull request #3208 from restic/add-mips
Add mips* architectures to CI and release
2021-01-03 16:06:10 +01:00
MichaelEischer
f7ec263a22 Merge pull request #3109 from aawsome/optimize-filerestorer
restore: Don't save pack content in memory
2021-01-03 14:53:41 +01:00
Alexander Neumann
7d665fa1f4 Add entry to changelog 2021-01-03 14:41:11 +01:00
Michael Eischer
69d5b4c36b restorer: lower-case variable name 2021-01-03 13:55:59 +01:00
Alexander Neumann
36db248e30 Split cross compilation targets into two jobs 2021-01-03 10:50:54 +01:00
Alexander Neumann
eb72b10f55 Add mips* architectures to CI and release 2021-01-03 10:41:54 +01:00
Alexander Neumann
622f4c7daa Run cloud backends also on pushes 2021-01-03 10:38:10 +01:00
Alexander Neumann
f8c50394d6 Revert "Run cloud tests if secrets are available"
This reverts commit aa648bdcac.
2021-01-03 10:28:01 +01:00
Alexander Neumann
aa648bdcac Run cloud tests if secrets are available 2021-01-03 10:26:55 +01:00
Alexander Neumann
e8abc79ce9 Add hints for keeping the list of architectures in sync 2021-01-01 10:09:04 +01:00
Alexander Weiss
34a33565c8 Fix loadBlob in filerestorer 2021-01-01 08:06:04 +01:00
Alexander Weiss
7409225fa8 Add filerestorer test where only parts of pack are used 2021-01-01 07:24:46 +01:00
Alexander Weiss
07b3f65a6f filesrestorer: Re-use buffer 2021-01-01 07:24:46 +01:00
Alexander Weiss
3e0acf1395 restore: Don't save (part of) pack in memory 2021-01-01 07:24:46 +01:00
Michael Eischer
97388b3504 Properly check that --quiet and --verbose are not combined
If --verbose is specified once, then globalOptions.Verbose == 1.
Previously --quiet --verbose would silently ignore the --verbose flag.
2020-12-30 21:24:18 +01:00
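A sketch of the fixed validation (the flag names match, but the helper and parameters are illustrative, not restic's actual globalOptions):

    package main

    import (
        "errors"
        "fmt"
    )

    // checkGlobalFlags rejects the combination: --verbose counts its
    // occurrences, so any value above zero conflicts with --quiet.
    func checkGlobalFlags(quiet bool, verbose int) error {
        if quiet && verbose > 0 {
            return errors.New("--quiet and --verbose cannot be specified at the same time")
        }
        return nil
    }

    func main() {
        fmt.Println(checkGlobalFlags(true, 1)) // rejected even for a single --verbose
    }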
Alexander Neumann
8b84c96d9d Merge pull request #3204 from MichaelEischer/archiver-tomb-race
archiver: fix race condition during worker startup
2020-12-30 20:04:20 +01:00
Michael Eischer
debc4a3a99 archiver: fix race condition during worker startup
When the tomb is created with a canceled context, the workers started
via `t.Go` exit nearly immediately. Once all started goroutines have
stopped for the first time, it is not allowed to issue further calls to
`t.Go`. This is a problem when the started goroutines exit immediately:
the first goroutine may already have stopped before the second one is
started, at which point no goroutines are running anymore and the second
`t.Go` call is invalid.

To fix this race condition the startup and main task of the archiver now
also run within a `t.Go` function. This also allows unifying the error
handling as it is no longer necessary to distinguish between errors
returned by the workers or the saveTree processing. The tomb now just
returns the first error encountered, which should also be the most
descriptive one.
2020-12-30 17:31:22 +01:00
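The shape of the fix, sketched with gopkg.in/tomb.v2 (the worker functions are placeholders, not the archiver's actual tasks): launching the workers from inside a `t.Go` function keeps at least one goroutine registered for the whole startup, so the tomb can never observe "all goroutines stopped" between two `t.Go` calls.

    package main

    import "gopkg.in/tomb.v2"

    func run(workers []func() error) error {
        var t tomb.Tomb
        // The outer t.Go stays alive while the workers are started, so
        // issuing further t.Go calls is always legal, even if some
        // workers exit immediately.
        t.Go(func() error {
            for _, w := range workers {
                t.Go(w)
            }
            return nil // startup done; workers keep the tomb alive
        })
        return t.Wait() // returns the first error encountered
    }

    func main() {
        _ = run([]func() error{
            func() error { return nil },
            func() error { return nil },
        })
    }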
MichaelEischer
e1efc193e1 Merge pull request #3139 from aawsome/prune-healing
prune: Add healing of repository in some situations
2020-12-29 22:17:27 +01:00
Alexander Weiss
f0113139ea prune: Correct error message 2020-12-29 20:20:05 +01:00
Alexander Weiss
f6df94a50e prune: Add self-healing
Allow prune to heal situations where blobs in the index are missing or
the corresponding packfiles are damaged if those blobs are not needed.
2020-12-29 20:20:05 +01:00
MichaelEischer
31e56f1ad5 Merge pull request #3197 from SkYNewZ/fix/3183
Fix tag handling for multiple tag lists
2020-12-29 18:44:38 +01:00
MichaelEischer
7fda2f2ad8 Merge pull request #3134 from greatroar/unlock-warn
Warn when unlock fails instead of returning an error
2020-12-29 18:30:08 +01:00
greatroar
dec5008369 Warn when unlock fails instead of returning an error
Only one caller was checking the error.
2020-12-29 17:48:20 +01:00
Quentin Lemaire
873505ed3b Update related changelog 2020-12-29 17:12:46 +01:00
Quentin Lemaire
25ecf9eafb fix(cmd_tag): Use restic.TagLists 2020-12-29 17:12:46 +01:00
Quentin Lemaire
e88f3fb80c fix(cmd_backup): Use restic.TagLists 2020-12-29 17:12:46 +01:00
Alexander Neumann
b2efa0af39 Merge pull request #3164 from MichaelEischer/improve-context-cancel
Improve context cancel handling in archiver and backends
2020-12-29 17:03:42 +01:00
Michael Eischer
25f4acdaa8 Add changelog entry 2020-12-29 16:32:18 +01:00
Michael Eischer
cff4955a48 ui: remove dead struct member 2020-12-29 16:32:18 +01:00
Michael Eischer
05a987b07c Support values less than 1 for RESTIC_PROGRESS_FPS
For example, set the variable to 0.016666 to print the progress once
per minute.
2020-12-29 16:32:18 +01:00
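The conversion this implies, as a small sketch (the helper is hypothetical, not restic's ui code): an FPS value below 1 simply yields an update interval longer than one second.

    package main

    import (
        "fmt"
        "time"
    )

    // updateInterval turns a frames-per-second value into the pause
    // between two progress updates; fps < 1 gives intervals > 1s.
    func updateInterval(fps float64) time.Duration {
        if fps <= 0 {
            return time.Second // fall back to one update per second
        }
        return time.Duration(float64(time.Second) / fps)
    }

    func main() {
        fmt.Println(updateInterval(0.016666)) // ≈ 1m0s
    }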
Michael Eischer
92da5168e1 ui: force backup progress update on signal 2020-12-29 16:32:18 +01:00
Michael Eischer
34afc93ddc ui/progress: extract signal handling into own package 2020-12-29 16:32:18 +01:00
Michael Eischer
023eea6463 ui: don't shorten non-interactive progress output 2020-12-29 16:32:18 +01:00
Michael Eischer
684600cf42 ui: update status for the backup command on non-interactive terminals
Allow the backup command to print status on non-interactive terminals.
The output is disabled by setting MinUpdatePause == 0.
2020-12-29 16:03:43 +01:00
Michael Eischer
85fe5feadb Unify progress report frequency calculation 2020-12-29 16:03:43 +01:00
Michael Eischer
969141b5e9 Honor RESTIC_PROGRESS_FPS env variable on non-interactive terminals
This makes it possible to use the environment variable to also get
regular progress updates on non-interactive terminals.
2020-12-29 16:03:43 +01:00
Michael Eischer
13ce981794 ui: cleanup backup status shutdown 2020-12-29 16:03:43 +01:00
Michael Eischer
c2ef049f1b ui/progress: don't print progress on non-interactive terminals
This reverts to the old behavior of not printing progress updates on
non-interactive terminals. It was accidentally changed in #3058.
2020-12-29 16:03:43 +01:00
MichaelEischer
a488d4c847 Merge pull request #2833 from greatroar/aix
AIX port
2020-12-29 12:32:50 +01:00
Alexander Neumann
4133b1ea65 Synchronize OS and architectures for testing 2020-12-29 11:11:50 +01:00
Alexander Neumann
46d2ca5095 Remove old test script 2020-12-29 11:02:48 +01:00
Quentin Lemaire
334d8ce724 feat(tags): Create Flatten() method 2020-12-29 10:59:46 +01:00
Alexander Neumann
c661518df9 Merge pull request #2793 from greatroar/rclone-doc
Clarify rclone-over-SSH docs
2020-12-29 10:46:34 +01:00
Michael Eischer
0d81f16343 Add AIX as cross-compile target to CI 2020-12-29 01:35:01 +01:00
greatroar
3b09ae9074 AIX port 2020-12-29 01:35:01 +01:00
greatroar
18531e3d6f Portability fixes to internal/restic
syscall.Mknod is not available on AIX.
2020-12-29 01:35:01 +01:00
Michael Eischer
ca07317815 add changelog 2020-12-28 21:06:47 +01:00
Michael Eischer
d0ca8fb0b8 backend: test that a canceled context prevents RetryBackend operations 2020-12-28 21:06:47 +01:00
Michael Eischer
08b7f2b58d archiver: test that context canceled error is not dropped 2020-12-28 21:06:47 +01:00
Michael Eischer
e483b63c40 retrybackend: Fail operations when context is already canceled
Depending on the backend used, operations started with a canceled
context may or may not fail. For example, the local backend still works
in large parts when called with a canceled context, while backends
transferring data via HTTP don't work. It is also not possible to retry
failed operations in that state, as the RetryBackend will abort with a
'context canceled' error.

Ensure uniform behavior by checking for a canceled context as a first
step in the RetryBackend. As all backends are always wrapped in a
RetryBackend, every backend now fails operations once its context is
canceled.
2020-12-28 21:06:47 +01:00
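A sketch of the added check in a generic retry wrapper (illustrative, not restic's RetryBackend types): failing fast on an already-canceled context gives every wrapped backend the same behavior.

    package main

    import (
        "context"
        "errors"
        "fmt"
    )

    // retry runs op with a few retries, but fails immediately when ctx
    // is already canceled, regardless of how the backend would behave.
    func retry(ctx context.Context, op func() error) error {
        if err := ctx.Err(); err != nil {
            return err // uniform 'context canceled' for every backend
        }
        var err error
        for i := 0; i < 3; i++ {
            if err = op(); err == nil || ctx.Err() != nil {
                break
            }
        }
        return err
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        cancel()
        err := retry(ctx, func() error { return nil })
        fmt.Println(errors.Is(err, context.Canceled)) // true
    }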
Michael Eischer
fc60b560ba archiver: Let saveTree report a canceled context as an error
If the context was canceled then saveTree might or might not receive a
treeID, depending on the timing. This could cause saveTree to
incorrectly return a nil treeID as valid. Fix this by always returning
an error when the context was canceled in the meantime.
2020-12-28 21:06:47 +01:00
Michael Eischer
736e964317 archiver: Don't lose error if background context is canceled
A canceled background context lets the blob/tree/fileSavers exit
without reporting an error. The error handling previously replaced
a 'context canceled' error received by the main backup method with
the error reported by the savers. However, in case of a canceled
background context that error is nil, causing restic to lose the
error and save a snapshot with a nil tree.
2020-12-28 21:06:47 +01:00
MichaelEischer
9c41e4a343 Merge pull request #3192 from DRON-666/fix-options
Fix formatting of `restic options` command
2020-12-23 22:53:40 +01:00
DRON-666
332b1896d1 Some options fixes
Add tests for bool type.
Fix subtle bug in TestOptionsApplyInvalid.
Fix options list formatting.
2020-12-23 23:26:04 +03:00
MichaelEischer
cb6b0f6255 Merge pull request #3181 from tWido/feature-no-retry-permission
Don't retry when "Permission denied" occurs in local backend
2020-12-23 20:24:29 +01:00
MichaelEischer
1e73aac610 Merge pull request #3179 from aawsome/check-filesize
check: Remove filesize counter
2020-12-23 20:22:02 +01:00
Alexander Weiss
2a1add7538 check: remove file size counter 2020-12-23 02:34:31 +01:00
tWido
7dab113035 Don't retry when "Permission denied" occurs in local backend 2020-12-22 23:41:12 +01:00
MichaelEischer
8efb874f48 Merge pull request #3148 from aawsome/simplify-index-rebuild
rebuild-index: code simplification
2020-12-22 23:27:53 +01:00
Michael Eischer
de99207046 repository: tweak comment for packs method 2020-12-22 23:01:58 +01:00
Alexander Weiss
d8d2cc6dd9 Simplify rebuild-index code 2020-12-22 23:01:58 +01:00
Alexander Weiss
68b74e359e Count packs directly in RebuildIndexFiles 2020-12-22 23:01:58 +01:00
Michael Eischer
b9f5d3fe13 repository: Add test for ForAllIndexes 2020-12-22 22:36:18 +01:00
Michael Eischer
a12c5f1d37 repository: move otherwise unused LoadIndex to tests 2020-12-22 22:36:18 +01:00
Michael Eischer
24474a36f4 repository: deduplicate index loading implementation 2020-12-22 22:36:18 +01:00
Michael Eischer
ccc84af73d debug/list: parallelize index loading 2020-12-22 22:36:18 +01:00
Michael Eischer
96904f8972 check: extract parallel index loading 2020-12-22 22:36:18 +01:00
MichaelEischer
69f9d269eb Merge pull request #3189 from restic/fix-counter-segfault
ui/progress: Use mutex instead of atomic
2020-12-22 22:14:08 +01:00
rawtaz
ec59c73489 Merge pull request #3188 from restic/forget-prune-dry-run
forget: Enable --dry-run together with --prune
2020-12-22 21:09:15 +01:00
Alexander Neumann
6c514adb8a ui/progress: Use mutex instead of atomic
The counter value needs to be aligned to 64 bit in memory for the
atomic functions to work on some platforms (such as 32-bit ARM).

The atomic package says in its documentation:

> These functions require great care to be used correctly. Except for
> special, low-level applications, synchronization is better done with
> channels or the facilities of the sync package.

This commit replaces the atomic functions with a simple sync.Mutex, so
we don't have to care about alignment.
2020-12-22 21:03:27 +01:00
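A minimal sketch of the replacement (illustrative, not the progress package's exact type): a mutex-protected counter has no alignment requirements.

    package main

    import (
        "fmt"
        "sync"
    )

    // Counter avoids sync/atomic and therefore the 64-bit alignment
    // rules that bite on 32-bit platforms such as ARM.
    type Counter struct {
        mu  sync.Mutex
        val uint64
    }

    func (c *Counter) Add(n uint64) {
        c.mu.Lock()
        c.val += n
        c.mu.Unlock()
    }

    func (c *Counter) Get() uint64 {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.val
    }

    func main() {
        var c Counter
        c.Add(42)
        fmt.Println(c.Get()) // 42
    }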
Alexander Neumann
edf89e1c74 forget: Enable --dry-run together with --prune 2020-12-22 20:58:02 +01:00
Alexander Neumann
f7c7c2f730 Merge pull request #3175 from MichaelEischer/fix-3099-changelog
Fix changelog for #3099
2020-12-20 11:29:57 +01:00
Michael Eischer
cfea79d0c5 prune: don't print stacktrace on console 2020-12-19 14:28:48 +01:00
Michael Eischer
5cd40f8b58 Add changelog 2020-12-19 11:38:38 +01:00
MichaelEischer
d32949ee54 Merge pull request #3081 from DRON-666/dump-zip
Add zip support to the `dump` command
2020-12-19 11:33:33 +01:00
DRON-666
83b10dbb12 Deduplicate dumper closing logic 2020-12-19 02:42:46 +03:00
DRON-666
e136dd8696 Deduplicate test code 2020-12-19 02:06:54 +03:00
DRON-666
33adb58817 Minor fixes and linter suggestions 2020-12-19 02:04:17 +03:00
DRON-666
da9053b184 Some grammar fixes in documentation 2020-12-19 01:16:15 +03:00
DRON-666
ef1aeb8724 dump: Update docs and changelog 2020-12-19 01:09:47 +03:00
DRON-666
2ca76afc2b dump: Add new option --archive 2020-12-19 01:09:47 +03:00
DRON-666
89ab6d557e dump: Add zip dumper 2020-12-19 01:09:47 +03:00
DRON-666
0256f95994 dump: Split tar and walk logic 2020-12-19 01:09:47 +03:00
Michael Eischer
bfadc82a20 Fix changelog for #3099 2020-12-18 20:58:24 +01:00
Michael Eischer
34b6130a0e restic: parallelize lock file loading 2020-12-18 20:46:16 +01:00
MichaelEischer
22260d130d Merge pull request #3170 from greatroar/dont-retry
Don't retry permanent errors in backends
2020-12-17 23:29:56 +01:00
greatroar
9341a83b05 Changelog entry for #2453 and #3170 2020-12-17 23:15:48 +01:00
greatroar
66d904c905 Make invalid handles permanent errors 2020-12-17 12:47:53 +01:00
greatroar
746dbda413 Mark "ssh exited" errors in SFTP as permanent 2020-12-17 12:43:09 +01:00
greatroar
f7784bddb3 Don't retry when "no space left on device" in local backend
Also adds relevant documentation to the restic.Backend interface.
2020-12-17 12:43:09 +01:00
Alexander Neumann
1cdd38d9e0 Merge pull request #3165 from restic/2747-doc-forget
doc: Clarify calendar boundaries for --keep-* options
2020-12-16 08:38:03 +01:00
Leo R. Lundgren
b3c0d2f45b doc: Clarify calendar boundaries for --keep-* options 2020-12-15 12:43:06 +01:00
Alexander Neumann
e96677cafb Merge pull request #3158 from MichaelEischer/support-swift-auth-id-variables
swift: Add support for id based keystone v3 auth parameters
2020-12-12 16:27:38 +01:00
Michael Eischer
1d69341e88 swift: Add support for id based keystone v3 auth parameters
This adds support for the following environment variables, which were
previously missing:

OS_USER_ID            User ID for keystone v3 authentication
OS_USER_DOMAIN_ID     User domain ID for keystone v3 authentication
OS_PROJECT_DOMAIN_ID  Project domain ID for keystone v3 authentication
OS_TRUST_ID           Trust ID for keystone v3 authentication
2020-12-11 19:22:34 +01:00
Alexander Neumann
36c5d39c2c Fix issues reported by semgrep 2020-12-11 09:41:59 +01:00
Alexander Neumann
7facc8ccc1 Merge pull request #2505 from aawsome/fix-repo-configfile
Fix repo configfile
2020-12-07 07:52:37 +01:00
Alexander Neumann
ba31c6fdaa Merge pull request #3150 from MichaelEischer/fix-windows-redir-output
termstatus: Fix canUpdateStatus detection for redirected output on windows
2020-12-06 21:17:27 +01:00
Alexander Neumann
b58799d83a Merge pull request #3152 from MichaelEischer/fix-backup-background-hang
backup: Fix shutdown hang when running in the background on linux
2020-12-06 21:16:00 +01:00
Alexander Neumann
0d5b764f90 Merge pull request #3130 from aawsome/snapshots-parallel
Make loading snapshots parallel
2020-12-06 21:08:18 +01:00
Alexander Weiss
d6b3859e48 Add changelog 2020-12-06 19:29:18 +01:00
Michael Eischer
b48f579530 termstatus: Fix canUpdateStatus detection for redirected output
The canUpdateStatus check was simplified in #2608, but it accidentally flipped
the condition. The correct check is as follows: If the output is a pipe then
restic probably runs in mintty/cygwin. In that case it's possible to
update the output status. In all other cases it isn't.

This commit inverts the condition again to restore the previous and
correct behavior.
2020-12-06 19:02:42 +01:00
Michael Eischer
401ef92c5f backup: Fix shutdown hang when running in the background
On shutdown the backup commands waits for the terminal output goroutine
to stop. However while running in the background the goroutine ignored
the canceled context.
2020-12-06 18:53:41 +01:00
Alexander Weiss
e329623771 Remove LoadAllSnapshots 2020-12-06 05:22:27 +01:00
Alexander Weiss
26f85779be Parallelize ForAllSnapshots 2020-12-06 05:09:58 +01:00
Alexander Weiss
5b9ee56335 Add ForAllSnapshots 2020-12-06 05:04:21 +01:00
MichaelEischer
3264eae9f6 Merge pull request #3149 from aawsome/fix-rebuild-index
Bugfix for rebuild-index
2020-12-05 22:18:22 +01:00
Alexander Weiss
83c8a9b058 Bugfix: packSizeFromList should save size from List() 2020-12-05 20:58:36 +01:00
Alexander Neumann
43cf301450 Merge pull request #3141 from MichaelEischer/fix-fuse-dir-owner
fuse: Properly set uid/gid for directories
2020-12-01 16:06:24 +01:00
Michael Eischer
d05c88a5d6 fuse: Properly set uid/gid for directories
In #2584 this was changed to use the uid/gid of the root node. This
would be okay for the top-level directory of a snapshot, however, this
change also applied to normal directories within a snapshot. This
change reverts the problematic part and adds a test that directory
attributes are represented correctly.
2020-11-30 23:41:49 +01:00
Alexander Neumann
058b102db0 Merge pull request #3138 from MichaelEischer/fix-debug-build 2020-11-30 12:27:51 +01:00
rawtaz
fcebc7d250 Merge pull request #3135 from restic/manual
doc: Fix misc missing/incorrect text in manual
2020-11-29 19:20:51 +01:00
MichaelEischer
f2959127b6 Merge pull request #3065 from greatroar/local-subdirs
Don't recurse in local backend's List if not required
2020-11-29 19:03:59 +01:00
Leo R. Lundgren
61460dee52 doc: Fix misc missing/incorrect text in manual 2020-11-29 18:59:24 +01:00
Michael Eischer
54a6d98945 Enable debug builds for CI 2020-11-29 18:47:00 +01:00
Michael Eischer
f72f6c9c80 Fix debug build 2020-11-29 18:44:36 +01:00
MichaelEischer
52b98f7f95 Merge pull request #3017 from greatroar/files-from0
Add backup options --files-from-verbatim and --files-from-raw
2020-11-29 18:15:21 +01:00
Alexander Neumann
04d856e601 Merge pull request #3136 from restic/rawtaz-copy-doc
doc: Emphasize double transfer and duplication in copy command
2020-11-29 13:57:10 +01:00
Alexander Neumann
a7b49c4889 Merge pull request #3119 from restic/keep-mountpoints
Keep mountpoints as empty directories for --one-file-system
2020-11-29 11:22:12 +01:00
rawtaz
a568211b98 Merge pull request #3128 from vrenaville/tzdata
[FIX] Timezone in docker image
2020-11-28 22:43:18 +01:00
Leo R. Lundgren
f70b10d0ee doc: Emphasize double transfer and duplication in copy command 2020-11-28 19:49:34 +01:00
greatroar
55bf76ba0c backup: Add --files-from-{verbatim,raw} options 2020-11-28 18:22:31 +01:00
Alexander Neumann
162117c42c Add changelog 2020-11-28 17:00:31 +01:00
Alexander Neumann
82ae942965 backup: Keep mountpoints for --one-file-system
When a file system is mounted at a directory, lstat() returns the
attributes of the root node of the mounted file system, including the
device ID of the other file system. Because of that, the previous code
used when --one-file-system is specified excluded the mountpoint
directory itself.

This commit changes the code so that mountpoints are kept as empty
directories, with their attributes set to those of the root node of the
mounted file system. The behavior mimics `tar`, which does the same.
2020-11-28 17:00:31 +01:00
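The device-ID comparison involved, sketched for Linux (illustrative, not the archiver's actual code): a mountpoint carries the device ID of the mounted file system, so it fails the comparison and is kept as an empty directory rather than skipped entirely.

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // sameFileSystem reports whether path is on the device identified
    // by rootDevice (the device ID of the backup root).
    func sameFileSystem(rootDevice uint64, path string) (bool, error) {
        fi, err := os.Lstat(path)
        if err != nil {
            return false, err
        }
        st, ok := fi.Sys().(*syscall.Stat_t)
        if !ok {
            return false, fmt.Errorf("unsupported platform")
        }
        return uint64(st.Dev) == rootDevice, nil
    }

    func main() {
        fi, _ := os.Lstat("/")
        root := uint64(fi.Sys().(*syscall.Stat_t).Dev)
        same, _ := sameFileSystem(root, "/proc") // false if /proc is a mountpoint
        fmt.Println(same)
    }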
Alexander Neumann
f576d3d826 Add tests 2020-11-28 17:00:31 +01:00
Alexander Neumann
037f0a4c91 Refactor device ID checking 2020-11-28 17:00:31 +01:00
Alexander Neumann
2f9346a5af Merge pull request #3125 from metalsp0rk/cat-respect-no-lock
Make restic cat respect --no-lock
2020-11-28 12:45:59 +01:00
Alexander Neumann
a3105799c9 Print warning when unlocking the repo fails 2020-11-28 11:31:25 +01:00
Kyle Brennan
666768cd17 cat: Respect --no-lock flag 2020-11-28 11:30:02 +01:00
vrenaville
adc7a6555f [FIX] Timezone in docker image 2020-11-27 08:05:19 +01:00
Alexander Neumann
9a97095a4c Merge pull request #3120 from aawsome/blob-implementation
Blob implementation
2020-11-22 20:59:30 +01:00
Alexander Weiss
aa7a5f19c2 Use BlobHandle in index methods 2020-11-22 20:41:12 +01:00
Alexander Weiss
e3013271a6 Harmonize naming 2020-11-22 20:41:12 +01:00
Alexander Weiss
92bd448691 Make BlobHandle substruct of Blob 2020-11-22 20:41:10 +01:00
Alexander Neumann
c844580e0f Merge pull request #3101 from aawsome/packsizes
Compute packsizes in MasterIndex
2020-11-22 15:49:19 +01:00
Alexander Weiss
67c938f232 Use PackSize in prune 2020-11-21 22:13:54 +01:00
Alexander Weiss
a851c53cbe Use PackSize in checker 2020-11-21 22:13:54 +01:00
Alexander Weiss
4960b841e6 Use PackSize in rebuild-index 2020-11-21 22:13:54 +01:00
Alexander Weiss
ce5d630681 Add MasterIndex.PackSize() 2020-11-21 22:13:54 +01:00
Alexander Weiss
c3ddde9e7d Return hdrSize in ListPack 2020-11-21 22:13:54 +01:00
rawtaz
cac481634c Merge pull request #3115 from johanbove/patch-1
Docs: Update 04_backup.rst
2020-11-20 11:23:03 +01:00
Johan Bové
c23b1a4cba Update 04_backup.rst
Fixed typo - _files_ are included from _folders_, not other _files_.
2020-11-20 07:52:23 +01:00
Alexander Neumann
110a32a08b Merge pull request #3113 from aawsome/fix-prune-stats-duplicates
Fix statistics in prune for duplicates
2020-11-19 20:34:11 +01:00
greatroar
8e213e82fc backend/local: replace fs.Walk with custom walker
This code is more strict in what it expects to find in the backend:
depending on the layout, either a directory full of files or a directory
full of such directories.
2020-11-19 16:46:42 +01:00
Alexander Neumann
8a150ee91f Merge pull request #3112 from MichaelEischer/reduce-progress-log-spam
Limit progress bar updates to once per second on non-terminal outputs
2020-11-19 07:54:36 +01:00
Alexander Weiss
cb5ec7ea6b Fix statistics in prune for duplicates
Note that this fix only solves the statistics problem if
all duplicates are marked for repacking.

If not all duplicates are marked for repacking, we lack the
information which of the duplicates will actually be removed.

The situation that not all duplicates are marked for repacking can occur
when using the `max-repack-size` option.
2020-11-18 22:30:22 +01:00
Michael Eischer
625410f003 Limit progress bar updates to once per second on non-terminal outputs
The code accidentally checked whether stdin is a terminal instead of
stdout, the former is not relevant here as the output is printed on
stdout.
2020-11-18 22:12:07 +01:00
Alexander Neumann
75eff92b56 Merge pull request #3107 from eleith/do-not-require-bucket-permissions-for-init
do not require gs bucket permissions to init repository
2020-11-18 16:53:45 +01:00
eleith
a24e986b2b do not require gs bucket permissions to init repository
a gs service account may only have object permissions on an existing
bucket but no bucket create/get permissions.

these service accounts are currently blocked from initializing a
restic repository because restic cannot determine if the bucket exists.

this PR updates the logic to assume the bucket exists when the bucket
attribute request results in a permissions denied error.

this way, restic can still initialize a repository if the service
account does have object permissions

fixes: https://github.com/restic/restic/issues/3100
2020-11-18 06:14:11 -08:00
rawtaz
6822ce8479 Merge pull request #3105 from restic/fix-init-repo-file
Allow using --repository-file in init
2020-11-17 21:54:44 +01:00
Alexander Neumann
d857fb6e59 Allow using --repository-file in init 2020-11-17 20:43:46 +01:00
rawtaz
342520b648 Merge pull request #3103 from greatroar/rephrase-pr3102
Rephrase #3095/#3102 changelog entry
2020-11-17 12:52:12 +01:00
greatroar
028f2b8c0e Rephrase #3095/#3102 changelog entry 2020-11-17 12:37:06 +01:00
rawtaz
1b6e8c888f Merge pull request #3102 from tofran/remove-rclone-drive-use-tras-default-param
Remove `--drive-use-trash=false` from rclone param
2020-11-17 11:21:20 +01:00
Alexander Neumann
5f3b802ee7 Merge pull request #3099 from MichaelEischer/check-less-memory 2020-11-16 10:11:25 +01:00
Michael Eischer
022dc35be9 Add changelog 2020-11-15 19:02:51 +01:00
Michael Eischer
1f43cac12d check: Only track data blobs when unused blobs should be reported
This improves the memory usage of check a lot as it now only has to
track tree blobs when run using the default parameters.
2020-11-15 18:43:07 +01:00
Michael Eischer
6da66c15d8 check: Simplify referenced blob tracking
The result is identical as long as the context is not canceled. However,
in that case the result is incomplete anyway.
2020-11-15 18:42:55 +01:00
Michael Eischer
3500f9490c check: Simplify blob status tracking
UnusedBlobs now directly reads the list of existing blobs from the
repository index. This removes the need for the blobStatusExists flag,
which in turn allows converting the blobRefs map into a BlobSet.
2020-11-15 18:42:42 +01:00
Michael Eischer
b8c7543a55 check: Merge 'size could not be found' and 'not found in index' errors
By construction these two errors always show up in pairs: 'size could
not be found' is printed when the blob is not found in the repository
index. That blob is also part of the `blobs` array. Later on, check
iterates over that array and checks whether the blob is marked as
existing, which cannot be the case as that mark is generated by
iterating over the repository index.

The merged warning no longer reports the blob index within a file. That
information could also be derived by printing the affected tree using
`cat` and searching for the blob.
2020-11-15 18:41:50 +01:00
MichaelEischer
45ba456291 Merge pull request #3038 from fgma/check-data-subset-percentage
Check data subset: check random percentage subset
2020-11-15 18:24:13 +01:00
Michael Eischer
caac38ed27 check: Tweak subset percentage over 100% error message 2020-11-15 18:13:50 +01:00
Michael Eischer
15c537f9db Rename changelog entry to contain issue id 2020-11-15 18:13:50 +01:00
fgma
8f9cea8cc0 Check data subset: check random percentage subset 2020-11-15 18:13:50 +01:00
Alexander Neumann
3c0c0c132b Merge pull request #3006 from aawsome/new-rebuild-index
Reimplement rebuild-index and remove /internal/index
2020-11-15 17:48:43 +01:00
Alexander Neumann
9968220652 Merge pull request #2850 from greatroar/packer-malloc
Decrease allocation rate in internal/pack
2020-11-15 17:19:49 +01:00
kitone
0649828555 Add changelog 2020-11-15 17:09:30 +01:00
kitone
3c03b35212 Fix #1681: don't try to create the mount point if it doesn't exist; return an error instead 2020-11-15 17:09:30 +01:00
greatroar
ab2b7d7f9a Decrease allocation rate in internal/pack
internal/repository benchmark results:

name             old time/op    new time/op    delta
PackerManager-8     179ms ± 1%     181ms ± 1%   +0.78%  (p=0.009 n=10+10)

name             old speed      new speed      delta
PackerManager-8   294MB/s ± 1%   292MB/s ± 1%   -0.77%  (p=0.009 n=10+10)

name             old alloc/op   new alloc/op   delta
PackerManager-8    91.3kB ± 0%    72.2kB ± 0%  -20.92%  (p=0.000 n=9+7)

name             old allocs/op  new allocs/op  delta
PackerManager-8     1.38k ± 0%     0.76k ± 0%  -45.20%  (p=0.000 n=10+7)
2020-11-15 16:51:47 +01:00
greatroar
9a8a2cae4c Move pack testing logic to test file 2020-11-15 16:45:49 +01:00
Alexander Weiss
9607cad267 Remove internal/index 2020-11-15 07:05:09 +01:00
Alexander Weiss
3d1d5295cc Add changelog 2020-11-15 07:05:09 +01:00
Alexander Weiss
30b6a0878a Reimplement rebuild-index 2020-11-15 07:05:09 +01:00
Alexander Weiss
187c8fb259 Parallelize MasterIndex.Save() 2020-11-15 07:05:09 +01:00
Alexander Weiss
1ec628ddf5 Add extraObsolete to MasterIndex.Save 2020-11-15 07:05:09 +01:00
Alexander Weiss
5898cb341f Use CreateIndexFromPacks() in test 2020-11-15 07:05:05 +01:00
Alexander Weiss
43732bb885 Add CreateIndexFromPacks() 2020-11-15 07:04:51 +01:00
MichaelEischer
6feaf6bd1f Merge pull request #3030 from greatroar/concrete-statt
Replace restic.statT interface by concrete types
2020-11-14 23:32:17 +01:00
greatroar
c45f8ee075 Replace restic.statT interface by concrete types
name                old time/op    new time/op    delta
NodeFillUser-8        1.81µs ± 9%    1.50µs ± 5%  -17.07%  (p=0.000 n=19+20)
NodeFromFileInfo-8    1.76µs ± 4%    1.49µs ± 6%  -15.63%  (p=0.000 n=20+19)

name                old alloc/op   new alloc/op   delta
NodeFillUser-8          496B ± 0%      352B ± 0%  -29.03%  (p=0.000 n=20+20)
NodeFromFileInfo-8      496B ± 0%      352B ± 0%  -29.03%  (p=0.000 n=20+20)

name                old allocs/op  new allocs/op  delta
NodeFillUser-8          3.00 ± 0%      2.00 ± 0%  -33.33%  (p=0.000 n=20+20)
NodeFromFileInfo-8      3.00 ± 0%      2.00 ± 0%  -33.33%  (p=0.000 n=20+20)
2020-11-14 23:23:26 +01:00
MichaelEischer
3601a9b6cd Merge pull request #2690 from SkYNewZ/master
Fix #2688: Handle comma-separated list tags when using backup command
2020-11-14 23:19:42 +01:00
Michael Eischer
fdec8051ab tags: Tweak description of --add/set/remove options 2020-11-14 22:55:30 +01:00
MichaelEischer
333c5a19d4 Merge pull request #3082 from aawsome/check-sizes
Check: check sizes of packs from index and packheader
2020-11-14 22:37:42 +01:00
Quentin Lemaire
a8ad6b9a4b fix(tags): Change tags list type according to restic.TagList 2020-11-14 16:24:58 +00:00
Quentin Lemaire
b0882b3f3c fix(snapshots): Update help message to better match the 'forget' command one 2020-11-14 15:48:57 +00:00
Quentin Lemaire
e74110a833 docs: Write new entry to changelog/unreleased 2020-11-14 15:48:56 +00:00
Quentin Lemaire
ae441d3134 fix(backup): Switch tags cobra type to handle comma-separated list 2020-11-14 15:48:56 +00:00
Alexander Neumann
913a34f568 Merge pull request #3094 from restic/fix-ci-tests
Fix ci tests
2020-11-14 11:24:28 +01:00
Alexander Neumann
468612b108 Disable fuse test for macOS 2020-11-14 11:02:49 +01:00
Alexander Weiss
7eabcabf68 Adjust changelog 2020-11-14 00:42:49 +01:00
Alexander Weiss
17bb77b1f9 check: Also check blob length and offset 2020-11-14 00:42:49 +01:00
Alexander Weiss
80dcfca191 check: Check sizes computed from index and pack header 2020-11-14 00:42:49 +01:00
tofran
94a154c7ca Remove --drive-use-trash=false from rclone param
Google Drive's trash retention policy changed, making this
no longer a good default.

Issue #3095
2020-11-13 22:58:48 +00:00
Alexander Neumann
219d9a62f2 Download assets from repo 2020-11-13 21:56:14 +01:00
rawtaz
e8713bc209 Merge pull request #3093 from restic/fix-diff-metadata
diff: Correctly count top-level blobs
2020-11-13 21:50:29 +01:00
Alexander Neumann
04d1983800 Merge pull request #3090 from fgma/vss-fix-386
vss: fix DeleteSnapshots() and GetSnapshotProperties() on 386
2020-11-13 21:30:34 +01:00
fgma
88208c3db2 vss: improved changelog entry 2020-11-13 21:13:17 +01:00
Alexander Neumann
59ea5a4208 diff: Correctly count top-level blobs 2020-11-13 21:11:21 +01:00
Alexander Neumann
145830005b Use old versions, otherwise tar doesn't work 2020-11-12 21:35:01 +01:00
Alexander Neumann
8ad9f88993 helpers: Improve error message 2020-11-12 20:38:31 +01:00
fgma
859d89b032 vss: add changelog file for issue 3090 2020-11-12 19:38:22 +01:00
fgma
f9223cd827 vss: fix DeleteSnapshots() and GetSnapshotProperties() on 386 2020-11-12 19:31:00 +01:00
Alexander Neumann
0be906a92f CI: Use netcologne mirror 2020-11-11 20:59:23 +01:00
rawtaz
dfb9326b1b Merge pull request #3085 from LordGaav/s3-list-objects-v1-flag
Extended option to select V1 API for ListObjects on S3 backend
2020-11-11 20:52:44 +01:00
Alexander Neumann
e4e0ce09ad CI: Update links for Windows binaries 2020-11-11 20:51:01 +01:00
Alexander Neumann
0334114865 CI: Hardcode mirror URL 2020-11-11 20:42:47 +01:00
Alexander Neumann
b1bbdcb637 Improve changelog 2020-11-11 20:30:30 +01:00
Alexander Neumann
4a0b7328ec s3: Remove dots for config description 2020-11-11 20:20:35 +01:00
Alexander Neumann
3e0456d88b Highlight that s3.list-objects-v1 is temporary 2020-11-11 20:11:35 +01:00
Alexander Neumann
dd94174379 Merge pull request #3086 from greatroar/diff-errors
Improve error reporting from restic diff
2020-11-11 19:48:03 +01:00
greatroar
63e32c44c0 Improve error reporting from restic diff
Instead of a stacktrace, restic diff 111 222 now reports:

	Fatal: no matching ID found for prefix "111"
2020-11-11 16:40:40 +01:00
Nick Douma
f013662e3f Remove separate section on Ceph, and move s3.list-objects-v1 note to S3 section 2020-11-11 15:11:14 +01:00
Nick Douma
4320ff2bbf Add changelog entry for s3.list-objects-v1 2020-11-11 12:33:22 +01:00
Nick Douma
354b7e89cc Document the extended s3.list-objects-v1 flag in a new Ceph section 2020-11-11 12:32:46 +01:00
Nick Douma
829959390a Provide UseV1 parameter to minio.ListObjectsOptions based on s3.list-objects-v1 2020-11-11 11:54:38 +01:00
Nick Douma
ccd55d529d Add s3.list-objects-v1 extended option and default to false 2020-11-11 11:54:36 +01:00
Nick Douma
4ddcc17135 Add support for boolean extended options 2020-11-11 11:54:27 +01:00
MichaelEischer
407843c5f9 Merge pull request #3034 from hoyho/gh_master
bugfix: omit ENOTDATA for extended attributes
2020-11-09 22:38:49 +01:00
MichaelEischer
46d31ab86d Merge pull request #3058 from greatroar/counter
Replace restic.Progress with new progress.Counter (fixes two race conditions)
2020-11-09 22:19:09 +01:00
Alexander Neumann
c986823d3f Merge pull request #3048 from aawsome/check-pack-index
check: check index for packs that are read
2020-11-09 20:12:32 +01:00
Alexander Weiss
239931578c check: check index for packs that are read 2020-11-09 17:28:14 +01:00
hoyho
9df52327cc bugfix: omit ENOTDATA for extended attributes
Signed-off-by: hoyho <luohaihao@gmail.com>
2020-11-10 00:20:34 +08:00
greatroar
21b787a4d1 Stop Counters where they're constructed and started 2020-11-09 13:03:31 +01:00
greatroar
ddca699cd2 Replace restic.Progress with new progress.Counter
This fixes two race conditions while cleaning up the code.
2020-11-09 12:12:35 +01:00
rawtaz
605db3b389 Merge pull request #3077 from MichaelEischer/tame-golangci-lint
Switch to the default set of golangci-lint linters
2020-11-08 22:34:27 +01:00
Michael Eischer
8de129e12f Switch to the default set of golangci-lint linters 2020-11-08 22:17:50 +01:00
Alexander Neumann
2072f0a481 Merge pull request #3076 from restic/check-go-mod-tidy
Check that go.mod/go.sum are up to date
2020-11-08 21:37:22 +01:00
Alexander Neumann
5731e391f8 Merge pull request #3075 from restic/warn-prune-no-cache
UI: Add several hints
2020-11-08 20:37:10 +01:00
Alexander Neumann
6a0a1d1f1c Check that go.mod/go.sum are up to date 2020-11-08 20:32:25 +01:00
Alexander Neumann
a8d21b5dcf prune: Warn if no cache is present 2020-11-08 20:25:35 +01:00
Alexander Neumann
823d0afd6e backup: Always print parent snapshot info 2020-11-08 20:25:35 +01:00
Alexander Neumann
a5989707ac Move test for golang-ci 2020-11-08 19:58:16 +01:00
Alexander Neumann
3a0cfafeb5 Fix quotes 2020-11-08 19:52:37 +01:00
Alexander Neumann
c923bd957d Fix test for pull request 2020-11-08 19:51:11 +01:00
Alexander Neumann
1a3f885d3d Only run golangci-lint for PRs 2020-11-08 17:34:07 +01:00
Alexander Neumann
3bf43d7951 Merge pull request #2475 from restic/use-github-actions
Use GitHub Actions for CI
2020-11-08 17:30:12 +01:00
Alexander Neumann
561da92396 Replace badges in README 2020-11-08 17:26:59 +01:00
Alexander Neumann
5cf42884c8 Run tests on GitHub Actions 2020-11-08 16:04:57 +01:00
MichaelEischer
9e4e0077fb Merge pull request #3069 from greatroar/xattr-eintr
Update pkg/xattr to handle EINTR on Linux
2020-11-08 12:52:23 +01:00
MichaelEischer
1758da855f Merge pull request #3068 from aawsome/context-mem-backend
Return context error in mem backend
2020-11-08 12:24:12 +01:00
greatroar
15ea90feed Update pkg/xattr to handle EINTR on Linux
Updates #2659. This is a case where the stdlib will not handle EINTR for
us, even with Go 1.16. That xattr calls are directly affected can be
seen in the report for issue #2968.
2020-11-08 09:34:24 +01:00
Alexander Weiss
826cfa0533 fix context in archiver tests 2020-11-08 08:24:24 +01:00
Alexander Weiss
fef408a8bd Return context error in mem backend 2020-11-08 00:05:53 +01:00
greatroar
a2d4209322 Don't recurse in local backend's List if not required
Due to the return if !isFile, the IsDir branch in List was never taken
and subdirectories were traversed recursively.

Also replaced isFile by an IsRegular check, which has been equivalent
since Go 1.12 (golang/go@a2a3dd00c9).
2020-11-07 08:54:13 +01:00
Alexander Neumann
275f713211 Remove Travis and AppVeyor 2020-11-06 21:49:52 +01:00
MichaelEischer
4707bdb204 Merge pull request #2842 from aawsome/rebuild-index-inmem
Rebuild index in prune by using in-memory index
2020-11-06 20:51:20 +01:00
Alexander Neumann
47277c4b4c Add comments, clarify computation 2020-11-06 20:23:30 +01:00
Alexander Weiss
d2e53730d6 Add test that repo.List is only called once 2020-11-06 20:23:30 +01:00
Alexander Weiss
fd33030556 Use in-memory index to rebuild index in prune 2020-11-06 20:23:30 +01:00
Alexander Weiss
38cc4393f6 Add Masterindex.Save(); Add Index.Packs() 2020-11-06 20:23:30 +01:00
Alexander Neumann
7f6f31c34b Remove Hound 2020-11-06 11:43:58 +01:00
Alexander Neumann
164b4cb2f6 Add changelog for #3014 2020-11-06 10:05:42 +01:00
Alexander Neumann
4a9b05aff1 Merge pull request #3063 from aawsome/fix-3062
Fix #3062
2020-11-05 19:36:40 +01:00
Alexander Weiss
aaf1c44362 Fix #3062 2020-11-05 17:05:42 +01:00
Alexander Neumann
a5592e83f7 Merge pull request #3014 from ivandeex/pr-stream-reset
Fix sporadic stream reset between rclone and restic
2020-11-05 16:11:50 +01:00
Ivan Andreev
ab2790d9de Fix http2 stream reset between restic and rest backends #3014 2020-11-05 15:57:40 +03:00
Alexander Neumann
8a2a326189 Merge pull request #2535 from ncw/fix-2528
s3: add bucket-lookup parameter to select path or dns style bucket lookup
2020-11-05 12:39:34 +01:00
Nick Craig-Wood
86b5d8ffaa s3: add bucket-lookup parameter to select path or dns style bucket lookup
This is to enable restic to work with Alibaba Cloud

Fixes #2528
2020-11-05 12:20:10 +01:00
Alexander Neumann
636b2f2e94 Merge pull request #2941 from MichaelEischer/parallel-repack
prune: Parallelize repack step
2020-11-05 11:00:41 +01:00
Alexander Neumann
ae5302c7a8 Add comment that keepBlobs is modified 2020-11-05 10:33:38 +01:00
Alexander Neumann
866a52ad4e Remove unneeded seek
The file returned from DownloadAndHash() is already seeked to the start
of the file.
2020-11-05 10:31:49 +01:00
Alexander Neumann
a4507610a0 Fix typo 2020-11-05 10:31:49 +01:00
Alexander Neumann
7def2d8ea7 Use a non-constant seed 2020-11-05 10:31:49 +01:00
Alexander Neumann
ee0112ab3b Clarify message about expected error 2020-11-05 10:31:49 +01:00
Michael Eischer
b373f164fe prune: Parallelize repack command 2020-11-05 10:31:49 +01:00
Alexander Neumann
8a0dbe7c1a helpers: Remove old changelog files for release 2020-11-05 10:26:00 +01:00
Alexander Neumann
4e3ad8b3b1 Remove changelog files from 0.11.0 release 2020-11-05 10:22:39 +01:00
Alexander Neumann
5144141321 Merge pull request #2718 from aawsome/new-cleanup-command
Reimplementation of prune
2020-11-05 10:12:19 +01:00
Alexander Neumann
d35d279455 Set development version for 0.11.0 2020-11-05 09:41:40 +01:00
Alexander Neumann
1ca60bccfb Refactor condition for MaxRepackBytes
Don't depend on the string (opts.MaxRepackSize) for the condition,
instead check if there's a (positive) limit configured.
2020-11-03 16:42:21 +01:00
Alexander Neumann
7f86eb4ec0 Move helper function 2020-11-03 16:42:21 +01:00
Alexander Neumann
c1a3de4a6e Refactor max-unused calculation, add unlimited option
Add a callback to the PruneOptions struct which calculates the number of
bytes allowed to be unused after prune is done. This way, the logic is
closer to the option parsing code.

Also, add an explicit option `unlimited` for the use case when storage
does not matter but bandwidth and time do. Internally, this sets the
maximum number of unused bytes to MaxUint64.

Rework the documentation slightly so that no more "packs" are
mentioned and it talks about "files" instead.

Make it clear in the documentation that the percentage given to
`--max-unused` is relative to the whole repository size after pruning is
done. If specified, it must be below 100%, otherwise the repository
would contain 100% of unused data, which is pointless.

I had a hard time coming up with the correct formula to calculate the
maximum number of unused bytes based on the number of used bytes. For a
fraction `p` (0 ≤ p < 1), a repo with `u` bytes used, and the number of
unused bytes `x` the following holds:

      x ≤ p * (u+x)
    ⇔ x ≤ p*u + p*x
    ⇔ x - p*x ≤ p*u
    ⇔ x * (1-p) ≤ p*u
    ⇔ x ≤ p/(1-p) * u
2020-11-03 16:42:21 +01:00
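A worked sketch of that formula (illustrative code, not restic's option parsing): with `--max-unused 5%` and 100 GiB of used data, up to 0.05/0.95 · 100 GiB ≈ 5.26 GiB of unused data may remain.

    package main

    import (
        "fmt"
        "math"
    )

    // maxUnusedBytes applies x ≤ p/(1-p) * u, where p is the allowed
    // unused fraction of the final repository size and u the used bytes.
    func maxUnusedBytes(p float64, used uint64) uint64 {
        if p >= 1 {
            return math.MaxUint64 // the "unlimited" case
        }
        return uint64(p / (1 - p) * float64(used))
    }

    func main() {
        used := uint64(100 << 30) // 100 GiB used
        limit := maxUnusedBytes(0.05, used)
        fmt.Printf("%.2f GiB\n", float64(limit)/(1<<30)) // 5.26 GiB
    }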
Alexander Neumann
f8c4dd7b1a Split pack rewrite logic into two case branches
The comma is too subtle, let's split this into two separate branches.
2020-11-03 16:42:21 +01:00
Alexander Neumann
a5b80452fe Add comment that usedBlobs is modified 2020-11-03 16:42:21 +01:00
Alexander Neumann
aff1e220f5 Split struct members, add comments 2020-11-03 16:42:21 +01:00
Alexander Neumann
095155d9ce Remove RepackSmall 2020-11-03 16:42:21 +01:00
Alexander Neumann
1dd9fdce74 Reword changelog slightly 2020-11-03 16:42:21 +01:00
Alexander Weiss
b2f5381737 Make realistic forget --prune --dryrun 2020-11-03 16:42:21 +01:00
Alexander Weiss
7f9a0a5907 Reimplementation of prune 2020-11-03 16:42:21 +01:00
Alexander Weiss
3b591ed987 Add Verboseff 2020-11-03 16:42:21 +01:00
Alexander Weiss
ce7d613749 Add Blob.Handle() 2020-11-03 16:42:21 +01:00
Alexander Weiss
581d90cf91 Make some pack parameters public 2020-11-03 16:42:21 +01:00
greatroar
55c3a90a0d Clarify rclone-over-SSH docs
Also added a link to S. Ruderich's blog post explaining append-only
repos using rclone and SSH.
2020-06-17 15:22:20 +02:00
Alexander Weiss
d3c59d18e5 Fix inconsistency of saving/loading config file
Fix saving/loading config file: Always set ID to a zero ID.
2020-06-13 16:30:23 +02:00
247 changed files with 7489 additions and 4527 deletions

.github/workflows/tests.yml (new file)

@@ -0,0 +1,258 @@
name: test
on:
  # run tests on push to master, but not when other branches are pushed to
  push:
    branches:
      - master

  # run tests for all pull requests
  pull_request:

jobs:
  test:
    strategy:
      matrix:
        # list of jobs to run:
        include:
          - job_name: Windows
            go: 1.15.x
            os: windows-latest

          - job_name: macOS
            go: 1.15.x
            os: macOS-latest
            test_fuse: false

          - job_name: Linux
            go: 1.15.x
            os: ubuntu-latest
            test_cloud_backends: true
            test_fuse: true
            check_changelog: true

          - job_name: Linux
            go: 1.14.x
            os: ubuntu-latest
            test_fuse: true

          - job_name: Linux
            go: 1.13.x
            os: ubuntu-latest
            test_fuse: true

    name: ${{ matrix.job_name }} Go ${{ matrix.go }}
    runs-on: ${{ matrix.os }}

    env:
      GOPROXY: https://proxy.golang.org

    steps:
      - name: Set up Go ${{ matrix.go }}
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go }}

      - name: Get programs (Linux/macOS)
        run: |
          echo "build Go tools"
          go get github.com/restic/rest-server/...

          echo "install minio server"
          mkdir $HOME/bin
          if [ "$RUNNER_OS" == "macOS" ]; then
            wget --no-verbose -O $HOME/bin/minio https://dl.minio.io/server/minio/release/darwin-amd64/minio
          else
            wget --no-verbose -O $HOME/bin/minio https://dl.minio.io/server/minio/release/linux-amd64/minio
          fi
          chmod 755 $HOME/bin/minio

          echo "install rclone"
          if [ "$RUNNER_OS" == "macOS" ]; then
            wget --no-verbose -O rclone.zip https://downloads.rclone.org/rclone-current-osx-amd64.zip
          else
            wget --no-verbose -O rclone.zip https://downloads.rclone.org/rclone-current-linux-amd64.zip
          fi
          unzip rclone.zip
          cp rclone*/rclone $HOME/bin
          chmod 755 $HOME/bin/rclone
          rm -rf rclone*

          # add $HOME/bin to path ($GOBIN was already added to the path by setup-go@v2)
          echo $HOME/bin >> $GITHUB_PATH
        if: matrix.os == 'ubuntu-latest' || matrix.os == 'macOS-latest'

      - name: Get programs (Windows)
        shell: powershell
        run: |
          $ProgressPreference = 'SilentlyContinue'

          echo "build Go tools"
          go get github.com/restic/rest-server/...

          echo "install minio server"
          mkdir $Env:USERPROFILE/bin
          Invoke-WebRequest https://dl.minio.io/server/minio/release/windows-amd64/minio.exe -OutFile $Env:USERPROFILE/bin/minio.exe

          echo "install rclone"
          Invoke-WebRequest https://downloads.rclone.org/rclone-current-windows-amd64.zip -OutFile rclone.zip
          unzip rclone.zip
          copy rclone*/rclone.exe $Env:USERPROFILE/bin

          # add $USERPROFILE/bin to path ($GOBIN was already added to the path by setup-go@v2)
          echo $Env:USERPROFILE\bin >> $Env:GITHUB_PATH

          echo "install tar"
          cd $env:USERPROFILE
          mkdir tar
          cd tar

          # install exactly these versions of tar and the libraries, other combinations might not work!
          Invoke-WebRequest https://github.com/restic/test-assets/raw/master/tar-1.13-1-bin.zip -OutFile tar.zip
          unzip tar.zip
          Invoke-WebRequest https://github.com/restic/test-assets/raw/master/libintl-0.11.5-2-bin.zip -OutFile libintl.zip
          unzip libintl.zip
          Invoke-WebRequest https://github.com/restic/test-assets/raw/master/libiconv-1.8-1-bin.zip -OutFile libiconv.zip
          unzip libiconv.zip

          # add $USERPROFILE/tar/bin to path
          echo $Env:USERPROFILE\tar\bin >> $Env:GITHUB_PATH
        if: matrix.os == 'windows-latest'

      - name: Check out code
        uses: actions/checkout@v2

      - name: Build with build.go
        run: |
          go run build.go

      - name: Run local Tests
        env:
          RESTIC_TEST_FUSE: ${{ matrix.test_fuse }}
        run: |
          go test -cover ./...

      - name: Test cloud backends
        env:
          RESTIC_TEST_S3_KEY: ${{ secrets.RESTIC_TEST_S3_KEY }}
          RESTIC_TEST_S3_SECRET: ${{ secrets.RESTIC_TEST_S3_SECRET }}
          RESTIC_TEST_S3_REPOSITORY: ${{ secrets.RESTIC_TEST_S3_REPOSITORY }}
          RESTIC_TEST_AZURE_ACCOUNT_NAME: ${{ secrets.RESTIC_TEST_AZURE_ACCOUNT_NAME }}
          RESTIC_TEST_AZURE_ACCOUNT_KEY: ${{ secrets.RESTIC_TEST_AZURE_ACCOUNT_KEY }}
          RESTIC_TEST_AZURE_REPOSITORY: ${{ secrets.RESTIC_TEST_AZURE_REPOSITORY }}
          RESTIC_TEST_B2_ACCOUNT_ID: ${{ secrets.RESTIC_TEST_B2_ACCOUNT_ID }}
          RESTIC_TEST_B2_ACCOUNT_KEY: ${{ secrets.RESTIC_TEST_B2_ACCOUNT_KEY }}
          RESTIC_TEST_B2_REPOSITORY: ${{ secrets.RESTIC_TEST_B2_REPOSITORY }}
          RESTIC_TEST_GS_REPOSITORY: ${{ secrets.RESTIC_TEST_GS_REPOSITORY }}
          RESTIC_TEST_GS_PROJECT_ID: ${{ secrets.RESTIC_TEST_GS_PROJECT_ID }}
          GOOGLE_PROJECT_ID: ${{ secrets.RESTIC_TEST_GS_PROJECT_ID }}
          RESTIC_TEST_GS_APPLICATION_CREDENTIALS_B64: ${{ secrets.RESTIC_TEST_GS_APPLICATION_CREDENTIALS_B64 }}
          RESTIC_TEST_OS_AUTH_URL: ${{ secrets.RESTIC_TEST_OS_AUTH_URL }}
          RESTIC_TEST_OS_TENANT_NAME: ${{ secrets.RESTIC_TEST_OS_TENANT_NAME }}
          RESTIC_TEST_OS_USERNAME: ${{ secrets.RESTIC_TEST_OS_USERNAME }}
          RESTIC_TEST_OS_PASSWORD: ${{ secrets.RESTIC_TEST_OS_PASSWORD }}
          RESTIC_TEST_OS_REGION_NAME: ${{ secrets.RESTIC_TEST_OS_REGION_NAME }}
          RESTIC_TEST_SWIFT: ${{ secrets.RESTIC_TEST_SWIFT }}
          # fail if any of the following tests cannot be run
          RESTIC_TEST_DISALLOW_SKIP: "restic/backend/rest.TestBackendREST,\
            restic/backend/sftp.TestBackendSFTP,\
            restic/backend/s3.TestBackendMinio,\
            restic/backend/rclone.TestBackendRclone,\
            restic/backend/s3.TestBackendS3,\
            restic/backend/swift.TestBackendSwift,\
            restic/backend/b2.TestBackendB2,\
            restic/backend/gs.TestBackendGS,\
            restic/backend/azure.TestBackendAzure"
        run: |
          # prepare credentials for Google Cloud Storage tests in a temp file
          export GOOGLE_APPLICATION_CREDENTIALS=$(mktemp --tmpdir restic-gcs-auth-XXXXXXX)
          echo $RESTIC_TEST_GS_APPLICATION_CREDENTIALS_B64 | base64 -d > $GOOGLE_APPLICATION_CREDENTIALS
          go test -cover -parallel 4 ./internal/backend/...
        # only run cloud backend tests for pull requests from and pushes to our
        # own repo, otherwise the secrets are not available
        if: (github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository) && matrix.test_cloud_backends

      - name: Check changelog files with calens
        run: |
          echo "install calens"
          go get github.com/restic/calens

          echo "check changelog files"
          calens
        if: matrix.check_changelog

  cross_compile:
    strategy:
      # ATTENTION: the list of architectures must be in sync with helpers/build-release-binaries/main.go!
      matrix:
        # run cross-compile in two batches parallel so the overall tests run faster
        targets:
          - "linux/386 linux/amd64 linux/arm linux/arm64 linux/ppc64le linux/mips linux/mipsle linux/mips64 linux/mips64le \
            openbsd/386 openbsd/amd64"
          - "freebsd/386 freebsd/amd64 freebsd/arm \
            aix/ppc64 \
            darwin/amd64 \
            netbsd/386 netbsd/amd64 \
            windows/386 windows/amd64 \
            solaris/amd64"

    env:
      go: 1.15.x
      GOPROXY: https://proxy.golang.org

    runs-on: ubuntu-latest

    name: Cross Compile for ${{ matrix.targets }}

    steps:
      - name: Set up Go ${{ env.go }}
        uses: actions/setup-go@v2
        with:
          go-version: ${{ env.go }}

      - name: Check out code
        uses: actions/checkout@v2

      - name: Install gox
        run: |
          go get github.com/mitchellh/gox

      - name: Cross-compile with gox for ${{ matrix.targets }}
        env:
          GOFLAGS: "-trimpath"
          GOX_ARCHS: "${{ matrix.targets }}"
        run: |
          mkdir build-output
          gox -parallel 2 -verbose -osarch "$GOX_ARCHS" -output "build-output/{{.Dir}}_{{.OS}}_{{.Arch}}" ./cmd/restic
          gox -parallel 2 -verbose -osarch "$GOX_ARCHS" -tags debug -output "build-output/{{.Dir}}_{{.OS}}_{{.Arch}}_debug" ./cmd/restic

  lint:
    name: lint
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v2

      - name: golangci-lint
        uses: golangci/golangci-lint-action@v2
        with:
          # Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
          version: v1.36
          # Optional: show only new issues if it's a pull request. The default value is `false`.
          only-new-issues: true
          args: --verbose --timeout 5m
        # only run golangci-lint for pull requests, otherwise ALL hints get
        # reported. We need to slowly address all issues until we can enable
        # linting the master branch :)
        if: github.event_name == 'pull_request'

      - name: Check go.mod/go.sum
        run: |
          echo "check if go.mod and go.sum are up to date"
          go mod tidy
          git diff --exit-code go.mod go.sum

.golangci.yml

@@ -0,0 +1,57 @@
# This is the configuration for golangci-lint for the restic project.
#
# A sample config with all settings is here:
# https://github.com/golangci/golangci-lint/blob/master/.golangci.example.yml

linters:
  # only enable the linters listed below
  disable-all: true
  enable:
    # make sure all errors returned by functions are handled
    - errcheck

    # find unused code
    - deadcode

    # show how code can be simplified
    - gosimple

    # make sure code is formatted
    - gofmt

    # examine code and report suspicious constructs, such as Printf calls whose
    # arguments do not align with the format string
    - govet

    # make sure names and comments are used according to the conventions
    - golint

    # detect when assignments to existing variables are not used
    - ineffassign

    # run static analysis and find errors
    - staticcheck

    # find unused variables, functions, structs, types, etc.
    - unused

    # find unused struct fields
    - structcheck

    # find unused global variables
    - varcheck

    # parse and typecheck code
    - typecheck

issues:
  # don't use the default exclude rules, this hides (among others) ignored
  # errors from Close() calls
  exclude-use-default: false

  # list of things to not warn about
  exclude:
    # golint: do not warn about missing comments for exported stuff
    - exported (function|method|var|type|const) `.*` should have comment or be unexported
    # golint: ignore constants in all caps
    - don't use ALL_CAPS in Go names; use CamelCase


@@ -1,2 +0,0 @@
go:
  enabled: true


@@ -1,58 +0,0 @@
language: go
sudo: false

matrix:
  include:
    - os: linux
      go: "1.13.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    - os: linux
      go: "1.14.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    # only run fuse and cloud backends tests on Travis for the latest Go on Linux
    - os: linux
      go: "1.15.x"
      sudo: true
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    - os: osx
      go: "1.15.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
          - $HOME/Library/Caches/go-build
          - $HOME/gopath/pkg/mod

branches:
  only:
    - master

notifications:
  irc:
    channels:
      - "chat.freenode.net#restic"
    on_success: change
    on_failure: change
    skip_join: true

install:
  - go version
  - export GOBIN="$GOPATH/bin"
  - export PATH="$PATH:$GOBIN"
  - go env

script:
  - go run run_integration_tests.go


@@ -1,3 +1,500 @@
Changelog for restic 0.12.0 (2021-02-14)
=======================================
The following sections list the changes in restic 0.12.0 relevant to
restic users. The changes are ordered by importance.
Summary
-------
* Fix #1681: Make `mount` not create missing mount point directory
* Fix #1800: Ignore `no data available` filesystem error during backup
* Fix #2563: Report the correct owner of directories in FUSE mounts
* Fix #2688: Make `backup` and `tag` commands separate tags by comma
* Fix #2739: Make the `cat` command respect the `--no-lock` option
* Fix #3087: The `--use-fs-snapshot` option now works on windows/386
* Fix #3100: Do not require gs bucket permissions when running `init`
* Fix #3111: Correctly detect output redirection for `backup` command on Windows
* Fix #3151: Don't create invalid snapshots when `backup` is interrupted
* Fix #3166: Improve error handling in the `restore` command
* Fix #3232: Correct statistics for overlapping targets
* Fix #3014: Fix sporadic stream reset between rclone and restic
* Fix #3152: Do not hang until foregrounded when completed in background
* Fix #3249: Improve error handling in `gs` backend
* Chg #3095: Deleting files on Google Drive now moves them to the trash
* Enh #2186: Allow specifying percentage in `check --read-data-subset`
* Enh #2453: Report permanent/fatal backend errors earlier
* Enh #2528: Add Alibaba/Aliyun OSS support in the `s3` backend
* Enh #2706: Configurable progress reports for non-interactive terminals
* Enh #2944: Add `backup` options `--files-from-{verbatim,raw}`
* Enh #3083: Allow usage of deprecated S3 `ListObjects` API
* Enh #3147: Support additional environment variables for Swift authentication
* Enh #3191: Add release binaries for MIPS architectures
* Enh #909: Back up mountpoints as empty directories
* Enh #3250: Add several more error checks
* Enh #2718: Improve `prune` performance and make it more customizable
* Enh #2495: Add option to let `backup` trust mtime without checking ctime
* Enh #2941: Speed up the repacking step of the `prune` command
* Enh #3006: Speed up the `rebuild-index` command
* Enh #3048: Add more checks for index and pack files in the `check` command
* Enh #2433: Make the `dump` command support `zip` format
* Enh #3099: Reduce memory usage of `check` command
* Enh #3106: Parallelize scan of snapshot content in `copy` and `prune`
* Enh #3130: Parallelize reading of locks and snapshots
* Enh #3254: Enable HTTP/2 for backend connections
Details
-------
* Bugfix #1681: Make `mount` not create missing mount point directory
When specifying a non-existent directory as mount point for the `mount` command, restic used
to create the specified directory automatically.
This has now changed such that restic instead gives an error when the specified directory for
the mount point does not exist.
https://github.com/restic/restic/issues/1681
https://github.com/restic/restic/pull/3008
* Bugfix #1800: Ignore `no data available` filesystem error during backup
Restic was unable to back up files on some filesystems, for example certain configurations of
CIFS on Linux which return a `no data available` error when reading extended attributes. These
errors are now ignored.
https://github.com/restic/restic/issues/1800
https://github.com/restic/restic/pull/3034
* Bugfix #2563: Report the correct owner of directories in FUSE mounts
Restic 0.10.0 changed the FUSE mount to always report the current user as the owner of
directories within the FUSE mount, which is incorrect.
This is now changed back to reporting the correct owner of a directory.
https://github.com/restic/restic/issues/2563
https://github.com/restic/restic/pull/3141
* Bugfix #2688: Make `backup` and `tag` commands separate tags by comma
Running `restic backup --tag foo,bar` previously created snapshots with one single tag
containing a comma (`foo,bar`) instead of two tags (`foo`, `bar`).
Similarly, the `tag` command's `--set`, `--add` and `--remove` options would treat
`foo,bar` as one tag instead of two tags. This was inconsistent with other commands and often
unexpected when one intended `foo,bar` to mean two tags.
To be consistent in all commands, restic now interprets `foo,bar` to mean two separate tags
(`foo` and `bar`) instead of one tag (`foo,bar`) everywhere, including in the `backup` and
`tag` commands.
NOTE: This change might result in unexpected behavior in cases where you use the `forget`
command and filter on tags like `foo,bar`. Snapshots previously backed up with `--tag
foo,bar` will still not match that filter, but snapshots saved from now on will match that
filter.
To replace `foo,bar` tags with `foo` and `bar` tags in old snapshots, you can first generate a
list of the relevant snapshots using a command like:
restic snapshots --json --quiet | jq '.[] | select(contains({tags: ["foo,bar"]})) | .id'
and then use `restic tag --set foo --set bar snapshotID [...]` to set the new tags. Please adjust
the commands to include real tag names and any additional tags, as well as the list of snapshots
to process.
https://github.com/restic/restic/issues/2688
https://github.com/restic/restic/pull/2690
https://github.com/restic/restic/pull/3197
* Bugfix #2739: Make the `cat` command respect the `--no-lock` option
The `cat` command would not respect the `--no-lock` flag. This is now fixed.
https://github.com/restic/restic/issues/2739
* Bugfix #3087: The `--use-fs-snapshot` option now works on windows/386
Restic failed to create VSS snapshots on windows/386 with the following error:
GetSnapshotProperties() failed: E_INVALIDARG (0x80070057)
This is now fixed.
https://github.com/restic/restic/issues/3087
https://github.com/restic/restic/pull/3090
* Bugfix #3100: Do not require gs bucket permissions when running `init`
Restic used to require bucket level permissions for the `gs` backend in order to initialize a
restic repository.
It now allows a `gs` service account to initialize a repository if the bucket does exist and the
service account has permissions to write/read to that bucket.
https://github.com/restic/restic/issues/3100
* Bugfix #3111: Correctly detect output redirection for `backup` command on Windows
On Windows, since restic 0.10.0 the `backup` command did not properly detect when the output
was redirected to a file. This caused restic to output terminal control characters. This has
been fixed by correcting the terminal detection.
https://github.com/restic/restic/issues/3111
https://github.com/restic/restic/pull/3150
* Bugfix #3151: Don't create invalid snapshots when `backup` is interrupted
When canceling a backup run at a certain moment it was possible that restic created a snapshot
with an invalid "null" tree. This caused `check` and other operations to fail. The `backup`
command now properly handles interruptions and never saves a snapshot when interrupted.
https://github.com/restic/restic/issues/3151
https://github.com/restic/restic/pull/3164
* Bugfix #3166: Improve error handling in the `restore` command
The `restore` command used to not print errors while downloading file contents from the
repository. It also incorrectly exited with a zero error code even when there were errors
during the restore process. This has all been fixed and `restore` now returns with a non-zero
exit code when there's an error.
https://github.com/restic/restic/issues/3166
https://github.com/restic/restic/pull/3207
* Bugfix #3232: Correct statistics for overlapping targets
A user reported that restic's statistics and progress information during backup was not
correctly calculated when the backup targets (files/dirs to save) overlap. For example,
consider a directory `foo` which contains (among others) a file `foo/bar`. When `restic
backup foo foo/bar` was run, restic counted the size of the file `foo/bar` twice, so the
completeness percentage as well as the number of files was wrong. This is now corrected.
https://github.com/restic/restic/issues/3232
https://github.com/restic/restic/pull/3243
* Bugfix #3014: Fix sporadic stream reset between rclone and restic
Sometimes when using restic with the `rclone` backend, an error message similar to the
following would be printed:
Didn't finish writing GET request (wrote 0/xxx): http2: stream closed
It was found that this was caused by restic closing the connection to rclone too soon when
downloading data. A workaround has been added which waits for the end of the download before
closing the connection.
https://github.com/rclone/rclone/issues/2598
https://github.com/restic/restic/pull/3014
* Bugfix #3152: Do not hang until foregrounded when completed in background
On Linux, when running in the background restic failed to stop the terminal output of the
`backup` command after it had completed. This caused restic to hang until moved to the
foreground. This has now been fixed.
https://github.com/restic/restic/pull/3152
https://forum.restic.net/t/restic-alpine-container-cron-hangs-epoll-pwait/3334
* Bugfix #3249: Improve error handling in `gs` backend
The `gs` backend did not notice when the last step of completing a file upload failed. Under rare
circumstances, this could cause missing files in the backup repository. This has now been
fixed.
https://github.com/restic/restic/pull/3249
* Change #3095: Deleting files on Google Drive now moves them to the trash
When deleting files on Google Drive via the `rclone` backend, restic used to bypass the trash
folder, which required that one use the `-o rclone.args` option to enable usage of the trash folder.
This ensured that deleted files in Google Drive were not kept indefinitely in the trash folder.
However, since Google Drive's trash retention policy changed to deleting trashed files after
30 days, this is no longer needed.
Restic now leaves it up to rclone and its configuration to use or not use the trash folder when
deleting files. The default is to use the trash folder, as of rclone 1.53.2. To re-enable the
restic 0.11 behavior, set the `RCLONE_DRIVE_USE_TRASH` environment variable or change the
rclone configuration. See the rclone documentation for more details.
https://github.com/restic/restic/issues/3095
https://github.com/restic/restic/pull/3102
* Enhancement #2186: Allow specifying percentage in `check --read-data-subset`
We've enhanced the `check` command's `--read-data-subset` option to also accept a
percentage (e.g. `2.5%` or `10%`). This will check the given percentage of pack files (which
are randomly selected on each run).
https://github.com/restic/restic/issues/2186
https://github.com/restic/restic/pull/3038
* Enhancement #2453: Report permanent/fatal backend errors earlier
When encountering errors in reading from or writing to storage backends, restic retries the
failing operation up to nine times (for a total of ten attempts). It used to retry all backend
operations, but now detects some permanent error conditions so that it can report fatal errors
earlier.
Permanent failures include local disks being full, SSH connections dropping and permission
errors.
https://github.com/restic/restic/issues/2453
https://github.com/restic/restic/issues/3180
https://github.com/restic/restic/pull/3170
https://github.com/restic/restic/pull/3181
* Enhancement #2528: Add Alibaba/Aliyun OSS support in the `s3` backend
A new extended option `s3.bucket-lookup` has been added to support Alibaba/Aliyun OSS in the
`s3` backend. The option can be set to one of the following values:
- `auto` - Existing behaviour
- `dns` - Use DNS style bucket access
- `path` - Use path style bucket access
To make the `s3` backend work with Alibaba/Aliyun OSS you must set `s3.bucket-lookup` to `dns`
and set the `s3.region` parameter. For example:
restic -o s3.bucket-lookup=dns -o s3.region=oss-eu-west-1 -r s3:https://oss-eu-west-1.aliyuncs.com/bucketname init
Note that `s3.region` must be set, otherwise the MinIO SDK tries to look it up and it seems that
Alibaba doesn't support that properly.
https://github.com/restic/restic/issues/2528
https://github.com/restic/restic/pull/2535
* Enhancement #2706: Configurable progress reports for non-interactive terminals
The `backup`, `check` and `prune` commands never printed any progress reports on
non-interactive terminals. This behavior is now configurable using the
`RESTIC_PROGRESS_FPS` environment variable. Use for example a value of `1` for an update
every second, or `0.01666` for an update every minute.
The `backup` command now also prints the current progress when restic receives a `SIGUSR1`
signal.
Setting the `RESTIC_PROGRESS_FPS` environment variable or sending a `SIGUSR1` signal
prints a status report even when `--quiet` was specified.
https://github.com/restic/restic/issues/2706
https://github.com/restic/restic/issues/3194
https://github.com/restic/restic/pull/3199
* Enhancement #2944: Add `backup` options `--files-from-{verbatim,raw}`
The new `backup` options `--files-from-verbatim` and `--files-from-raw` read a list of
files to back up from a file. Unlike the existing `--files-from` option, these options do not
interpret the listed filenames as glob patterns; instead, whitespace in filenames is
preserved as-is and no pattern expansion is done. Please see the documentation for specifics.
These new options are highly recommended over `--files-from`, when using a script to generate
the list of files to back up.
https://github.com/restic/restic/issues/2944
https://github.com/restic/restic/issues/3013
* Enhancement #3083: Allow usage of deprecated S3 `ListObjects` API
Some S3 API implementations, e.g. Ceph before version 14.2.5, have a broken `ListObjectsV2`
implementation which causes problems for restic when using their API endpoints. When a broken
server implementation is used, restic prints errors similar to the following:
List() returned error: Truncated response should have continuation token set
As a temporary workaround, restic now allows using the older `ListObjects` endpoint by
setting the `s3.list-objects-v1` extended option, for instance:
restic -o s3.list-objects-v1=true snapshots
Please note that this option may be removed in future versions of restic.
https://github.com/restic/restic/issues/3083
https://github.com/restic/restic/pull/3085
* Enhancement #3147: Support additional environment variables for Swift authentication
The `swift` backend now supports the following additional environment variables for passing
authentication details to restic: `OS_USER_ID`, `OS_USER_DOMAIN_ID`,
`OS_PROJECT_DOMAIN_ID` and `OS_TRUST_ID`
Depending on the `openrc` configuration file these might be required when the user and project
domains differ from one another.
https://github.com/restic/restic/issues/3147
https://github.com/restic/restic/pull/3158
* Enhancement #3191: Add release binaries for MIPS architectures
We've added a few new architectures for Linux to the release binaries: `mips`, `mipsle`,
`mips64`, and `mips64le`. MIPS is mostly used for low-end embedded systems.
https://github.com/restic/restic/issues/3191
https://github.com/restic/restic/pull/3208
* Enhancement #909: Back up mountpoints as empty directories
When the `--one-file-system` option is specified to `restic backup`, it ignores all file
systems mounted below one of the target directories. This means that when a snapshot is
restored, users had to manually recreate the mountpoint directories.
Restic now backs up mountpoints as empty directories and therefore implements the same
approach as `tar`.
https://github.com/restic/restic/issues/909
https://github.com/restic/restic/pull/3119
* Enhancement #3250: Add several more error checks
We've added a lot more error checks in places where errors were previously ignored (as hinted by
the static analysis program `errcheck` via `golangci-lint`).
https://github.com/restic/restic/pull/3250
* Enhancement #2718: Improve `prune` performance and make it more customizable
The `prune` command is now much faster. This is especially the case for remote repositories or
repositories with not much data to remove. Also the memory usage of the `prune` command is now
reduced.
Restic used to rebuild the index from scratch after pruning. This could lead to missing packs in
the index in some cases for eventually consistent backends such as e.g. AWS S3. This behavior is
now changed and the index rebuilding uses the information already known by `prune`.
By default, the `prune` command no longer removes all unused data. This behavior can be
fine-tuned by new options, like the acceptable amount of unused space or the maximum size of
data to reorganize. For more details, please see
https://restic.readthedocs.io/en/stable/060_forget.html .
Moreover, `prune` now accepts the `--dry-run` option and also running `forget --dry-run
--prune` will show what `prune` would do.
This enhancement also fixes several open issues, e.g.:
- https://github.com/restic/restic/issues/1140
- https://github.com/restic/restic/issues/1599
- https://github.com/restic/restic/issues/1985
- https://github.com/restic/restic/issues/2112
- https://github.com/restic/restic/issues/2227
- https://github.com/restic/restic/issues/2305
https://github.com/restic/restic/pull/2718
https://github.com/restic/restic/pull/2842
* Enhancement #2495: Add option to let `backup` trust mtime without checking ctime
The `backup` command used to require that both `ctime` and `mtime` of a file matched with a
previously backed up version to determine that the file was unchanged. In other words, if
either `ctime` or `mtime` of the file had changed, it would be considered changed and restic
would read the file's content again to back up the relevant (changed) parts of it.
The new option `--ignore-ctime` makes restic look at `mtime` only, such that `ctime` changes
for a file do not cause restic to read the file's contents again.
The check for both `ctime` and `mtime` was introduced in restic 0.9.6 to make backups more
reliable in the face of programs that reset `mtime` (some Unix archivers do that), but it turned
out to often be expensive because it made restic read file contents even if only the metadata
(owner, permissions) of a file had changed. The new `--ignore-ctime` option lets the user
restore the 0.9.5 behavior when needed. The existing `--ignore-inode` option already turned
off this behavior, but also removed a different check.
Please note that changes in files' metadata are still recorded, regardless of the command line
options provided to the backup command.
https://github.com/restic/restic/issues/2495
https://github.com/restic/restic/issues/2558
https://github.com/restic/restic/issues/2819
https://github.com/restic/restic/pull/2823
* Enhancement #2941: Speed up the repacking step of the `prune` command
The repack step of the `prune` command, which moves still used file parts into new pack files
such that the old ones can be garbage collected later on, now processes multiple pack files in
parallel. This is especially beneficial for high latency backends or when using a fast network
connection.
https://github.com/restic/restic/pull/2941
* Enhancement #3006: Speed up the `rebuild-index` command
We've optimized the `rebuild-index` command. Now, existing index entries are used to
minimize the number of pack files that must be read. This speeds up the index rebuild a lot.
Additionally, the option `--read-all-packs` has been added, implementing the previous
behavior.
https://github.com/restic/restic/pull/3006
https://github.com/restic/restic/issues/2547
* Enhancement #3048: Add more checks for index and pack files in the `check` command
The `check` command run with the `--read-data` or `--read-data-subset` options used to
verify only the pack file content - it did not check if the blobs within the pack are correctly
contained in the index.
A check for the latter is now in place, which can print the following error:
Blob ID is not contained in index or position is incorrect
Another test is also added, which compares pack file sizes computed from the index and the pack
header with the actual file size. This test is able to detect truncated pack files.
If the index is not correct, it can be rebuilt by using the `rebuild-index` command.
Having added these tests, `restic check` is now able to detect non-existing blobs which are
wrongly referenced in the index. This situation could have led to missing data.
https://github.com/restic/restic/pull/3048
https://github.com/restic/restic/pull/3082
* Enhancement #2433: Make the `dump` command support `zip` format
Previously, restic could dump the contents of a whole folder structure only in the `tar`
format. The `dump` command now has a new flag to change output format to `zip`. Just pass
`--archive zip` as an option to `restic dump`.
https://github.com/restic/restic/pull/2433
https://github.com/restic/restic/pull/3081
* Enhancement #3099: Reduce memory usage of `check` command
The `check` command now requires less memory if it is run without the `--check-unused` option.
https://github.com/restic/restic/pull/3099
* Enhancement #3106: Parallelize scan of snapshot content in `copy` and `prune`
The `copy` and `prune` commands used to traverse the directories of snapshots one by one to find
used data. This snapshot traversal is now parallelized, which can speed up this step several
times.
In addition the `check` command now reports how many snapshots have already been processed.
https://github.com/restic/restic/pull/3106
* Enhancement #3130: Parallelize reading of locks and snapshots
Restic used to read snapshots sequentially. For repositories containing many snapshots this
slowed down commands which have to read all snapshots.
Now the reading of snapshots is parallelized. This speeds up for example `prune`, `backup` and
other commands that search for snapshots with certain properties or which have to find the
`latest` snapshot.
The speed-up also applies to locks stored in the backup repository.
https://github.com/restic/restic/pull/3130
https://github.com/restic/restic/pull/3174
* Enhancement #3254: Enable HTTP/2 for backend connections
Go's HTTP library usually automatically chooses between HTTP/1.x and HTTP/2 depending on
what the server supports. But for compatibility this mechanism is disabled if DialContext is
used (which is the case for restic). This change allows restic's HTTP client to negotiate
HTTP/2 if supported by the server.
https://github.com/restic/restic/pull/3254
Changelog for restic 0.11.0 (2020-11-05)
=======================================


@@ -141,6 +141,14 @@ Installing the script `fmt-check` from https://github.com/edsrzf/gofmt-git-hook
locally as a pre-commit hook checks formatting before committing automatically;
just copy this script to `.git/hooks/pre-commit`.
The project is using the program
[`golangci-lint`](https://github.com/golangci/golangci-lint) to run a list of
linters and checkers. It will be run on the code when you submit a PR. In order
to check your code beforehand, you can run `golangci-lint run` manually.
Eventually, we will enable `golangci-lint` for the whole code base. For now,
you can ignore warnings printed for lines you did not modify, those will be
ignored by the CI.
For each pull request, several different systems run the integration tests on
Linux, macOS and Windows. We won't merge any code that does not pass all tests
for all systems, so when a test fails, try to find out what's wrong and fix


@@ -1,6 +1,5 @@
[![Documentation](https://readthedocs.org/projects/restic/badge/?version=latest)](https://restic.readthedocs.io/en/latest/?badge=latest)
[![Build Status Travis](https://travis-ci.com/restic/restic.svg?branch=master)](https://travis-ci.com/restic/restic)
[![Build Status AppVeyor](https://ci.appveyor.com/api/projects/status/nuy4lfbgfbytw92q/branch/master?svg=true)](https://ci.appveyor.com/project/fd0/restic/branch/master)
[![Build Status](https://github.com/restic/restic/workflows/test/badge.svg)](https://github.com/restic/restic/actions?query=workflow%3Atest)
[![Go Report Card](https://goreportcard.com/badge/github.com/restic/restic)](https://goreportcard.com/report/github.com/restic/restic)
# Introduction


@@ -1 +1 @@
0.11.0
0.12.0


@@ -1,32 +0,0 @@
clone_folder: c:\restic

environment:
  GOPATH: c:\gopath

branches:
  only:
    - master

cache:
  - '%LocalAppData%\go-build'

init:
  - ps: >-
      $app = Get-WmiObject -Class Win32_Product -Filter "Vendor = 'http://golang.org'"

      if ($app) {
        $app.Uninstall()
      }

install:
  - rmdir c:\go /s /q
  - appveyor DownloadFile https://dl.google.com/go/go1.15.2.windows-amd64.msi
  - msiexec /i go1.15.2.windows-amd64.msi /q
  - go version
  - go env
  - appveyor DownloadFile https://sourceforge.netcologne.de/project/gnuwin32/tar/1.13-1/tar-1.13-1-bin.zip -FileName tar.zip
  - 7z x tar.zip bin/tar.exe
  - set PATH=bin/;%PATH%

build_script:
  - go run run_integration_tests.go


@@ -0,0 +1,10 @@
Bugfix: Make `mount` not create missing mount point directory
When specifying a non-existent directory as mount point for the `mount`
command, restic used to create the specified directory automatically.
This has now changed such that restic instead gives an error when the
specified directory for the mount point does not exist.
https://github.com/restic/restic/issues/1681
https://github.com/restic/restic/pull/3008


@@ -0,0 +1,8 @@
Bugfix: Ignore `no data available` filesystem error during backup
Restic was unable to back up files on some filesystems, for example certain
configurations of CIFS on Linux which return a `no data available` error
when reading extended attributes. These errors are now ignored.
https://github.com/restic/restic/issues/1800
https://github.com/restic/restic/pull/3034


@@ -0,0 +1,8 @@
Enhancement: Allow specifying percentage in `check --read-data-subset`
We've enhanced the `check` command's `--read-data-subset` option to also accept
a percentage (e.g. `2.5%` or `10%`). This will check the given percentage of
pack files (which are randomly selected on each run).
https://github.com/restic/restic/issues/2186
https://github.com/restic/restic/pull/3038
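
For illustration, a percentage value like this could be parsed as follows;
this is a hedged sketch, not restic's actual option parser, and the helper
name is made up:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parsePercentage turns strings like "2.5%" or "10%" into a fraction.
    func parsePercentage(s string) (float64, error) {
        if !strings.HasSuffix(s, "%") {
            return 0, fmt.Errorf("%q is not a percentage", s)
        }
        p, err := strconv.ParseFloat(strings.TrimSuffix(s, "%"), 64)
        if err != nil || p <= 0 || p > 100 {
            return 0, fmt.Errorf("invalid percentage %q", s)
        }
        return p / 100, nil
    }

    func main() {
        f, _ := parsePercentage("2.5%")
        fmt.Println(f) // 0.025, i.e. 2.5% of the pack files
    }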


@@ -0,0 +1,14 @@
Enhancement: Report permanent/fatal backend errors earlier
When encountering errors in reading from or writing to storage backends,
restic retries the failing operation up to nine times (for a total of ten
attempts). It used to retry all backend operations, but now detects some
permanent error conditions so that it can report fatal errors earlier.
Permanent failures include local disks being full, SSH connections
dropping and permission errors.
https://github.com/restic/restic/issues/2453
https://github.com/restic/restic/pull/3170
https://github.com/restic/restic/issues/3180
https://github.com/restic/restic/pull/3181
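
To illustrate the idea: restic's real retry logic uses exponential backoff
and is more involved, but the permanent-error short-circuit boils down to
something like this sketch (the helper names are made up):

    package main

    import (
        "errors"
        "fmt"
        "os"
    )

    // retry runs op up to ten times, but gives up immediately on errors
    // classified as permanent instead of exhausting all attempts.
    func retry(op func() error, isPermanent func(error) bool) error {
        var err error
        for attempt := 0; attempt < 10; attempt++ {
            if err = op(); err == nil {
                return nil
            }
            if isPermanent(err) {
                return err // fatal: report right away
            }
        }
        return err
    }

    func main() {
        err := retry(
            func() error { return os.ErrPermission },
            func(err error) bool { return errors.Is(err, os.ErrPermission) },
        )
        fmt.Println(err) // reported after a single attempt
    }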


@@ -0,0 +1,21 @@
Enhancement: Add Alibaba/Aliyun OSS support in the `s3` backend
A new extended option `s3.bucket-lookup` has been added to support
Alibaba/Aliyun OSS in the `s3` backend. The option can be set to one
of the following values:
- `auto` - Existing behaviour
- `dns` - Use DNS style bucket access
- `path` - Use path style bucket access
To make the `s3` backend work with Alibaba/Aliyun OSS you must set
`s3.bucket-lookup` to `dns` and set the `s3.region` parameter. For
example:
restic -o s3.bucket-lookup=dns -o s3.region=oss-eu-west-1 -r s3:https://oss-eu-west-1.aliyuncs.com/bucketname init
Note that `s3.region` must be set, otherwise the MinIO SDK tries to
look it up and it seems that Alibaba doesn't support that properly.
https://github.com/restic/restic/issues/2528
https://github.com/restic/restic/pull/2535


@@ -0,0 +1,9 @@
Bugfix: Report the correct owner of directories in FUSE mounts
Restic 0.10.0 changed the FUSE mount to always report the current user
as the owner of directories within the FUSE mount, which is incorrect.
This is now changed back to reporting the correct owner of a directory.
https://github.com/restic/restic/issues/2563
https://github.com/restic/restic/pull/3141


@@ -0,0 +1,31 @@
Bugfix: Make `backup` and `tag` commands separate tags by comma
Running `restic backup --tag foo,bar` previously created snapshots with one
single tag containing a comma (`foo,bar`) instead of two tags (`foo`, `bar`).
Similarly, the `tag` command's `--set`, `--add` and `--remove` options would
treat `foo,bar` as one tag instead of two tags. This was inconsistent with
other commands and often unexpected when one intended `foo,bar` to mean two
tags.
To be consistent in all commands, restic now interprets `foo,bar` to mean two
separate tags (`foo` and `bar`) instead of one tag (`foo,bar`) everywhere,
including in the `backup` and `tag` commands.
NOTE: This change might result in unexpected behavior in cases where you use
the `forget` command and filter on tags like `foo,bar`. Snapshots previously
backed up with `--tag foo,bar` will still not match that filter, but snapshots
saved from now on will match that filter.
To replace `foo,bar` tags with `foo` and `bar` tags in old snapshots, you can
first generate a list of the relevant snapshots using a command like:
restic snapshots --json --quiet | jq '.[] | select(contains({tags: ["foo,bar"]})) | .id'
and then use `restic tag --set foo --set bar snapshotID [...]` to set the new
tags. Please adjust the commands to include real tag names and any additional
tags, as well as the list of snapshots to process.
https://github.com/restic/restic/issues/2688
https://github.com/restic/restic/pull/2690
https://github.com/restic/restic/pull/3197


@@ -0,0 +1,17 @@
Enhancement: Configurable progress reports for non-interactive terminals
The `backup`, `check` and `prune` commands never printed any progress
reports on non-interactive terminals. This behavior is now configurable
using the `RESTIC_PROGRESS_FPS` environment variable. Use for example a
value of `1` for an update every second, or `0.01666` for an update every
minute.
The `backup` command now also prints the current progress when restic
receives a `SIGUSR1` signal.
Setting the `RESTIC_PROGRESS_FPS` environment variable or sending a `SIGUSR1`
signal prints a status report even when `--quiet` was specified.
https://github.com/restic/restic/issues/2706
https://github.com/restic/restic/issues/3194
https://github.com/restic/restic/pull/3199
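
Schematically, the environment variable maps to an update interval roughly
like this (a simplified sketch, not restic's actual progress code):

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "time"
    )

    // progressInterval derives an update interval from RESTIC_PROGRESS_FPS.
    func progressInterval() time.Duration {
        fps, err := strconv.ParseFloat(os.Getenv("RESTIC_PROGRESS_FPS"), 64)
        if err != nil || fps <= 0 {
            return time.Second // fallback when unset or invalid
        }
        return time.Duration(float64(time.Second) / fps)
    }

    func main() {
        // RESTIC_PROGRESS_FPS=0.01666 yields roughly one update per minute
        fmt.Println(progressInterval())
    }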


@@ -0,0 +1,5 @@
Bugfix: Make the `cat` command respect the `--no-lock` option
The `cat` command would not respect the `--no-lock` flag. This is now fixed.
https://github.com/restic/restic/issues/2739


@@ -0,0 +1,13 @@
Enhancement: Add `backup` options `--files-from-{verbatim,raw}`
The new `backup` options `--files-from-verbatim` and `--files-from-raw` read a
list of files to back up from a file. Unlike the existing `--files-from`
option, these options do not interpret the listed filenames as glob patterns;
instead, whitespace in filenames is preserved as-is and no pattern expansion is
done. Please see the documentation for specifics.
These new options are highly recommended over `--files-from`, when using a
script to generate the list of files to back up.
https://github.com/restic/restic/issues/2944
https://github.com/restic/restic/issues/3013


@@ -0,0 +1,18 @@
Enhancement: Allow usage of deprecated S3 `ListObjects` API
Some S3 API implementations, e.g. Ceph before version 14.2.5, have a broken
`ListObjectsV2` implementation which causes problems for restic when using
their API endpoints. When a broken server implementation is used, restic prints
errors similar to the following:
List() returned error: Truncated response should have continuation token set
As a temporary workaround, restic now allows using the older `ListObjects`
endpoint by setting the `s3.list-objects-v1` extended option, for instance:
restic -o s3.list-objects-v1=true snapshots
Please note that this option may be removed in future versions of restic.
https://github.com/restic/restic/issues/3083
https://github.com/restic/restic/pull/3085


@@ -0,0 +1,10 @@
Bugfix: The `--use-fs-snapshot` option now works on windows/386
Restic failed to create VSS snapshots on windows/386 with the following error:
GetSnapshotProperties() failed: E_INVALIDARG (0x80070057)
This is now fixed.
https://github.com/restic/restic/issues/3087
https://github.com/restic/restic/pull/3090


@@ -0,0 +1,17 @@
Change: Deleting files on Google Drive now moves them to the trash
When deleting files on Google Drive via the `rclone` backend, restic used to
bypass the trash folder, which required that one use the `-o rclone.args` option to
enable usage of the trash folder. This ensured that deleted files in Google
Drive were not kept indefinitely in the trash folder. However, since Google
Drive's trash retention policy changed to deleting trashed files after 30 days,
this is no longer needed.
Restic now leaves it up to rclone and its configuration to use or not use the
trash folder when deleting files. The default is to use the trash folder, as
of rclone 1.53.2. To re-enable the restic 0.11 behavior, set the
`RCLONE_DRIVE_USE_TRASH` environment variable or change the rclone
configuration. See the rclone documentation for more details.
https://github.com/restic/restic/issues/3095
https://github.com/restic/restic/pull/3102


@@ -0,0 +1,10 @@
Bugfix: Do not require gs bucket permissions when running `init`
Restic used to require bucket level permissions for the `gs` backend
in order to initialize a restic repository.
It now allows a `gs` service account to initialize a repository if the
bucket does exist and the service account has permissions to write/read
to that bucket.
https://github.com/restic/restic/issues/3100


@@ -0,0 +1,9 @@
Bugfix: Correctly detect output redirection for `backup` command on Windows
On Windows, since restic 0.10.0 the `backup` command did not properly detect
when the output was redirected to a file. This caused restic to output
terminal control characters. This has been fixed by correcting the terminal
detection.
https://github.com/restic/restic/issues/3111
https://github.com/restic/restic/pull/3150


@@ -0,0 +1,11 @@
Enhancement: Support additional environment variables for Swift authentication
The `swift` backend now supports the following additional environment variables
for passing authentication details to restic:
`OS_USER_ID`, `OS_USER_DOMAIN_ID`, `OS_PROJECT_DOMAIN_ID` and `OS_TRUST_ID`
Depending on the `openrc` configuration file these might be required when the
user and project domains differ from one another.
https://github.com/restic/restic/issues/3147
https://github.com/restic/restic/pull/3158


@@ -0,0 +1,9 @@
Bugfix: Don't create invalid snapshots when `backup` is interrupted
When canceling a backup run at a certain moment it was possible that
restic created a snapshot with an invalid "null" tree. This caused
`check` and other operations to fail. The `backup` command now properly
handles interruptions and never saves a snapshot when interrupted.
https://github.com/restic/restic/issues/3151
https://github.com/restic/restic/pull/3164


@@ -0,0 +1,9 @@
Bugfix: Improve error handling in the `restore` command
The `restore` command used to not print errors while downloading file contents
from the repository. It also incorrectly exited with a zero error code even
when there were errors during the restore process. This has all been fixed and
`restore` now returns with a non-zero exit code when there's an error.
https://github.com/restic/restic/issues/3166
https://github.com/restic/restic/pull/3207


@@ -0,0 +1,8 @@
Enhancement: Add release binaries for MIPS architectures
We've added a few new architectures for Linux to the release binaries: `mips`,
`mipsle`, `mips64`, and `mips64le`. MIPS is mostly used for low-end embedded
systems.
https://github.com/restic/restic/issues/3191
https://github.com/restic/restic/pull/3208


@@ -0,0 +1,11 @@
Bugfix: Correct statistics for overlapping targets
A user reported that restic's statistics and progress information during backup
was not correctly calculated when the backup targets (files/dirs to save)
overlap. For example, consider a directory `foo` which contains (among others)
a file `foo/bar`. When `restic backup foo foo/bar` was run, restic counted the
size of the file `foo/bar` twice, so the completeness percentage as well as the
number of files was wrong. This is now corrected.
https://github.com/restic/restic/issues/3232
https://github.com/restic/restic/pull/3243


@@ -0,0 +1,12 @@
Enhancement: Back up mountpoints as empty directories
When the `--one-file-system` option is specified to `restic backup`, it
ignores all file systems mounted below one of the target directories. This
means that when a snapshot is restored, users had to manually recreate
the mountpoint directories.
Restic now backs up mountpoints as empty directories and therefore implements
the same approach as `tar`.
https://github.com/restic/restic/issues/909
https://github.com/restic/restic/pull/3119


@@ -0,0 +1,6 @@
Enhancement: Add several more error checks
We've added a lot more error checks in places where errors were previously
ignored (as hinted by the static analysis program `errcheck` via `golangci-lint`).
https://github.com/restic/restic/pull/3250


@@ -0,0 +1,29 @@
Enhancement: Improve `prune` performance and make it more customizable
The `prune` command is now much faster. This is especially the case for remote
repositories or repositories with not much data to remove. Also the memory
usage of the `prune` command is now reduced.
Restic used to rebuild the index from scratch after pruning. This could lead
to missing packs in the index in some cases for eventually consistent backends
such as e.g. AWS S3. This behavior is now changed and the index rebuilding
uses the information already known by `prune`.
By default, the `prune` command no longer removes all unused data. This
behavior can be fine-tuned by new options, like the acceptable amount of
unused space or the maximum size of data to reorganize. For more details,
please see https://restic.readthedocs.io/en/stable/060_forget.html .
Moreover, `prune` now accepts the `--dry-run` option and also running
`forget --dry-run --prune` will show what `prune` would do.
This enhancement also fixes several open issues, e.g.:
- https://github.com/restic/restic/issues/1140
- https://github.com/restic/restic/issues/1599
- https://github.com/restic/restic/issues/1985
- https://github.com/restic/restic/issues/2112
- https://github.com/restic/restic/issues/2227
- https://github.com/restic/restic/issues/2305
https://github.com/restic/restic/pull/2718
https://github.com/restic/restic/pull/2842


@@ -0,0 +1,27 @@
Enhancement: Add option to let `backup` trust mtime without checking ctime
The `backup` command used to require that both `ctime` and `mtime` of a file
matched with a previously backed up version to determine that the file was
unchanged. In other words, if either `ctime` or `mtime` of the file had
changed, it would be considered changed and restic would read the file's
content again to back up the relevant (changed) parts of it.
The new option `--ignore-ctime` makes restic look at `mtime` only, such that
`ctime` changes for a file do not cause restic to read the file's contents
again.
The check for both `ctime` and `mtime` was introduced in restic 0.9.6 to make
backups more reliable in the face of programs that reset `mtime` (some Unix
archivers do that), but it turned out to often be expensive because it made
restic read file contents even if only the metadata (owner, permissions) of
a file had changed. The new `--ignore-ctime` option lets the user restore the
0.9.5 behavior when needed. The existing `--ignore-inode` option already
turned off this behavior, but also removed a different check.
Please note that changes in files' metadata are still recorded, regardless of
the command line options provided to the backup command.
https://github.com/restic/restic/issues/2495
https://github.com/restic/restic/issues/2558
https://github.com/restic/restic/issues/2819
https://github.com/restic/restic/pull/2823
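
Schematically, the change-detection rule becomes something like the
following; this is only an illustration, the struct and function names are
made up and do not reflect restic's internals:

    package main

    import (
        "fmt"
        "time"
    )

    type fileMeta struct {
        ModTime    time.Time
        ChangeTime time.Time
    }

    // unchanged reports whether a file's content can be assumed unmodified.
    // With ignoreCtime set, a differing ctime alone no longer forces a re-read.
    func unchanged(prev, cur fileMeta, ignoreCtime bool) bool {
        if !cur.ModTime.Equal(prev.ModTime) {
            return false
        }
        return ignoreCtime || cur.ChangeTime.Equal(prev.ChangeTime)
    }

    func main() {
        now := time.Now()
        prev := fileMeta{ModTime: now, ChangeTime: now}
        cur := fileMeta{ModTime: now, ChangeTime: now.Add(time.Hour)}
        fmt.Println(unchanged(prev, cur, false)) // false: ctime differs
        fmt.Println(unchanged(prev, cur, true))  // true: mtime matches
    }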


@@ -0,0 +1,8 @@
Enhancement: Speed up the repacking step of the `prune` command
The repack step of the `prune` command, which moves still used file parts into
new pack files such that the old ones can be garbage collected later on, now
processes multiple pack files in parallel. This is especially beneficial for
high latency backends or when using a fast network connection.
https://github.com/restic/restic/pull/2941
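
The underlying pattern is a bounded worker pool; a generic sketch using
golang.org/x/sync/errgroup, where repackOne is a made-up stand-in for the
actual repack step:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    func repackOne(ctx context.Context, pack string) error {
        fmt.Println("repacking", pack) // stand-in for the real work
        return nil
    }

    // repackAll feeds pack IDs to a fixed number of workers and stops
    // early if any worker fails.
    func repackAll(ctx context.Context, packs []string, workers int) error {
        g, ctx := errgroup.WithContext(ctx)
        ch := make(chan string)
        g.Go(func() error {
            defer close(ch)
            for _, p := range packs {
                select {
                case ch <- p:
                case <-ctx.Done():
                    return ctx.Err()
                }
            }
            return nil
        })
        for i := 0; i < workers; i++ {
            g.Go(func() error {
                for p := range ch {
                    if err := repackOne(ctx, p); err != nil {
                        return err
                    }
                }
                return nil
            })
        }
        return g.Wait()
    }

    func main() {
        _ = repackAll(context.Background(), []string{"p1", "p2", "p3"}, 2)
    }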


@@ -0,0 +1,11 @@
Enhancement: Speed up the `rebuild-index` command
We've optimized the `rebuild-index` command. Now, existing index entries are used
to minimize the number of pack files that must be read. This speeds up the index
rebuild a lot.
Additionally, the option `--read-all-packs` has been added, implementing the
previous behavior.
https://github.com/restic/restic/issues/2547
https://github.com/restic/restic/pull/3006


@@ -0,0 +1,13 @@
Bugfix: Fix sporadic stream reset between rclone and restic
Sometimes when using restic with the `rclone` backend, an error message
similar to the following would be printed:
Didn't finish writing GET request (wrote 0/xxx): http2: stream closed
It was found that this was caused by restic closing the connection to rclone
too soon when downloading data. A workaround has been added which waits for
the end of the download before closing the connection.
https://github.com/restic/restic/pull/3014
https://github.com/rclone/rclone/issues/2598


@@ -0,0 +1,23 @@
Enhancement: Add more checks for index and pack files in the `check` command
The `check` command run with the `--read-data` or `--read-data-subset` options
used to verify only the pack file content - it did not check if the blobs
within the pack are correctly contained in the index.
A check for the latter is now in place, which can print the following error:
Blob ID is not contained in index or position is incorrect
Another test is also added, which compares pack file sizes computed from the
index and the pack header with the actual file size. This test is able to
detect truncated pack files.
If the index is not correct, it can be rebuilt by using the `rebuild-index`
command.
Having added these tests, `restic check` is now able to detect non-existing
blobs which are wrongly referenced in the index. This situation could have
led to missing data.
https://github.com/restic/restic/pull/3048
https://github.com/restic/restic/pull/3082
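
The truncation check boils down to comparing an expected size with the size
on disk; a simplified sketch (the function name is illustrative, not
restic's actual code):

    package main

    import (
        "fmt"
        "os"
    )

    // checkPackSize compares the size a pack file should have according to
    // the index and pack header with its actual size on disk.
    func checkPackSize(path string, expected int64) error {
        fi, err := os.Stat(path)
        if err != nil {
            return err
        }
        if fi.Size() != expected {
            return fmt.Errorf("pack %s: size is %d, expected %d (truncated?)",
                path, fi.Size(), expected)
        }
        return nil
    }

    func main() {
        fmt.Println(checkPackSize("/tmp/does-not-exist", 42))
    }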


@@ -0,0 +1,8 @@
Enhancement: Make the `dump` command support `zip` format
Previously, restic could dump the contents of a whole folder structure only
in the `tar` format. The `dump` command now has a new flag to change output
format to `zip`. Just pass `--archive zip` as an option to `restic dump`.
https://github.com/restic/restic/pull/2433
https://github.com/restic/restic/pull/3081


@@ -0,0 +1,6 @@
Enhancement: Reduce memory usage of `check` command
The `check` command now requires less memory if it is run without the
`--check-unused` option.
https://github.com/restic/restic/pull/3099


@@ -0,0 +1,10 @@
Enhancement: Parallelize scan of snapshot content in `copy` and `prune`
The `copy` and `prune` commands used to traverse the directories of
snapshots one by one to find used data. This snapshot traversal is
now parallelized, which can speed up this step several times.
In addition the `check` command now reports how many snapshots have
already been processed.
https://github.com/restic/restic/pull/3106


@@ -0,0 +1,13 @@
Enhancement: Parallelize reading of locks and snapshots
Restic used to read snapshots sequentially. For repositories containing
many snapshots this slowed down commands which have to read all snapshots.
Now the reading of snapshots is parallelized. This speeds up for example
`prune`, `backup` and other commands that search for snapshots with certain
properties or which have to find the `latest` snapshot.
The speed-up also applies to locks stored in the backup repository.
https://github.com/restic/restic/pull/3130
https://github.com/restic/restic/pull/3174


@@ -0,0 +1,8 @@
Bugfix: Do not hang until foregrounded when completed in background
On Linux, when running in the background restic failed to stop the terminal
output of the `backup` command after it had completed. This caused restic to
hang until moved to the foreground. This has now been fixed.
https://github.com/restic/restic/pull/3152
https://forum.restic.net/t/restic-alpine-container-cron-hangs-epoll-pwait/3334


@@ -0,0 +1,7 @@
Bugfix: Improve error handling in `gs` backend
The `gs` backend did not notice when the last step of completing a
file upload failed. Under rare circumstances, this could cause
missing files in the backup repository. This has now been fixed.
https://github.com/restic/restic/pull/3249


@@ -0,0 +1,8 @@
Enhancement: Enable HTTP/2 for backend connections
Go's HTTP library usually automatically chooses between HTTP/1.x and HTTP/2
depending on what the server supports. But for compatibility this mechanism
is disabled if DialContext is used (which is the case for restic). This change
allows restic's HTTP client to negotiate HTTP/2 if supported by the server.
https://github.com/restic/restic/pull/3254
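
Concretely, Go 1.14 added the `ForceAttemptHTTP2` transport field, which
re-enables the automatic upgrade even with a custom `DialContext`. A minimal
sketch (not necessarily identical to restic's transport setup):

    package main

    import (
        "net"
        "net/http"
        "time"
    )

    func newTransport() *http.Transport {
        return &http.Transport{
            // a custom dialer normally disables the automatic HTTP/2 upgrade...
            DialContext: (&net.Dialer{Timeout: 30 * time.Second}).DialContext,
            // ...unless HTTP/2 is explicitly requested again:
            ForceAttemptHTTP2: true,
        }
    }

    func main() {
        client := &http.Client{Transport: newTransport()}
        _ = client
    }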


@@ -1,11 +0,0 @@
Bugfix: Restore timestamps and permissions on intermediate directories
When using the `--include` option of the restore command, restic restored
timestamps and permissions only on directories selected by the include pattern.
Intermediate directories, which are necessary to restore files located in sub-
directories, were created with default permissions. We've fixed the restore
command to restore timestamps and permissions for these directories as well.
https://github.com/restic/restic/issues/1212
https://github.com/restic/restic/issues/1402
https://github.com/restic/restic/pull/2906


@@ -1,15 +0,0 @@
Bugfix: Mark repository files as read-only when using the local backend
Files stored in a local repository were marked as writeable on the
filesystem for non-Windows systems, which did not prevent accidental file
modifications outside of restic. In addition, the local backend did not work
with certain filesystems and network mounts which do not permit modifications
of file permissions.
restic now marks files stored in a local repository as read-only on the
filesystem on non-Windows systems. The error handling is improved to support
more filesystems.
https://github.com/restic/restic/issues/1756
https://github.com/restic/restic/issues/2157
https://github.com/restic/restic/pull/2989


@@ -1,10 +0,0 @@
Bugfix: Hide password in REST backend repository URLs
When using a password in the REST backend repository URL,
the password could in some cases be included in the output
from restic, e.g. when initializing a repo or during an error.
The password is now replaced with "***" where applicable.
https://github.com/restic/restic/issues/2241
https://github.com/restic/restic/pull/2658


@@ -1,12 +0,0 @@
Bugfix: Correctly dump directories into tar files
The dump command previously wrote directories in a tar file in a way which
can cause compatibility problems. This caused, for example, 7zip on Windows
to not open tar files containing directories. In addition it was not possible
to dump directories with extended attributes. These compatibility problems
are now corrected.
In addition, a tar file now includes the name of the owner and group of a file.
https://github.com/restic/restic/issues/2319
https://github.com/restic/restic/pull/3039


@@ -1,9 +0,0 @@
Bugfix: Don't require `self-update --output` placeholder file
`restic self-update --output /path/to/new-restic` used to require that
new-restic was an existing file, to be overwritten. Now it's possible
to download an updated restic binary to a new path, without first
having to create a placeholder file.
https://github.com/restic/restic/issues/2491
https://github.com/restic/restic/pull/2937


@@ -1,7 +0,0 @@
Bugfix: Fix rare cases of backup command hanging forever
We've fixed an issue with the backup progress reporting which could cause
restic to hang forever right before finishing a backup.
https://github.com/restic/restic/issues/2834
https://github.com/restic/restic/pull/2963


@@ -1,6 +0,0 @@
Bugfix: Fix manpage formatting
The manpage formatting in restic v0.10.0 was garbled; this has now been fixed.
https://github.com/restic/restic/issues/2938
https://github.com/restic/restic/pull/2977


@@ -1,7 +0,0 @@
Bugfix: Make --exclude-larger-than handle disappearing files
There was a small bug in the backup command's --exclude-larger-than
option where files that disappeared between scanning and actually
backing them up to the repository caused a panic. This is now fixed.
https://github.com/restic/restic/issues/2942
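
The shape of such a fix can be sketched as follows; shouldBackup is a
hypothetical helper, not the actual archiver code:

package main

import "os"

// shouldBackup stats the file again and treats a vanished file as "skip"
// instead of letting the --exclude-larger-than size check run on missing
// file info and panic.
func shouldBackup(path string, maxSize int64) (bool, error) {
	fi, err := os.Lstat(path)
	if os.IsNotExist(err) {
		return false, nil // disappeared between scanning and backup
	}
	if err != nil {
		return false, err
	}
	return fi.Size() <= maxSize, nil
}

func main() {
	ok, err := shouldBackup("/tmp/example", 1<<20) // hypothetical path, 1 MiB limit
	_, _ = ok, err
}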


@@ -1,9 +0,0 @@
Bugfix: restic generate, help and self-update no longer check passwords
The commands `restic cache`, `generate`, `help` and `self-update` don't need
passwords, but they previously did run the RESTIC_PASSWORD_COMMAND (if set in
the environment), prompting users to authenticate for no reason. They now skip
running the password command.
https://github.com/restic/restic/issues/2951
https://github.com/restic/restic/pull/2987


@@ -1,9 +0,0 @@
Enhancement: Optimize check for unchanged files during backup
During a backup restic skips processing files which have not changed since the last backup run.
Previously this required opening each file once, which can be slow on network filesystems. The
backup command now checks for file changes before opening a file. This considerably reduces
the time to create a backup on network filesystems.
https://github.com/restic/restic/issues/2969
https://github.com/restic/restic/pull/2970
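
A minimal sketch of the idea, assuming hypothetical size and mtime values
taken from the parent snapshot (this is not the archiver's real change
detection):

package main

import (
	"os"
	"time"
)

// unchanged decides from cheap stat metadata alone whether a file can be
// skipped, so unchanged files never have to be opened.
func unchanged(fi os.FileInfo, prevSize int64, prevModTime time.Time) bool {
	return fi.Size() == prevSize && fi.ModTime().Equal(prevModTime)
}

func main() {
	fi, err := os.Stat("example.txt") // hypothetical file
	if err != nil {
		return
	}
	_ = unchanged(fi, fi.Size(), fi.ModTime()) // trivially true here
}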


@@ -1,9 +0,0 @@
Bugfix: Make snapshots --json output [] instead of null when no snapshots
Restic previously output `null` instead of `[]` for the `--json snapshots`
command when there were no snapshots in the repository. This caused some
minor problems when parsing the output; restic now outputs `[]` when the
list of snapshots is empty.
https://github.com/restic/restic/issues/2979
https://github.com/restic/restic/pull/2984
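
The underlying Go behaviour is easy to reproduce: encoding/json marshals a
nil slice as null but an empty, non-nil slice as [], so the fix amounts to
making sure the slice is initialized before encoding.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var snapshots []string // nil slice
	out, _ := json.Marshal(snapshots)
	fmt.Println(string(out)) // null

	snapshots = []string{} // empty, non-nil slice
	out, _ = json.Marshal(snapshots)
	fmt.Println(string(out)) // []
}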


@@ -1,12 +0,0 @@
Enhancement: Add support for Volume Shadow Copy Service (VSS) on Windows
Volume Shadow Copy Service uses a filesystem snapshot to allow read access to
files that are locked by another process with an exclusive lock. Restic was
previously unable to back up those files; this update enables backing them up.
This needs to be enabled explicitly using the --use-fs-snapshot option of the
backup command.
https://github.com/restic/restic/issues/340
https://github.com/restic/restic/pull/2274


@@ -1,7 +0,0 @@
Enhancement: Authenticate to Google Cloud Storage with access token
When using the GCS backend, it is now possible to authenticate with OAuth2
access tokens instead of a credentials file by setting the GOOGLE_ACCESS_TOKEN
environment variable.
https://github.com/restic/restic/pull/2849
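
A rough sketch of how such a token can be wired up with golang.org/x/oauth2
(the details of restic's gs backend may differ):

package main

import (
	"fmt"
	"os"

	"golang.org/x/oauth2"
)

// tokenSource builds a static OAuth2 token source from GOOGLE_ACCESS_TOKEN
// when the variable is set, instead of loading a credentials file.
func tokenSource() oauth2.TokenSource {
	tok := os.Getenv("GOOGLE_ACCESS_TOKEN")
	if tok == "" {
		return nil // fall back to the default credentials file
	}
	return oauth2.StaticTokenSource(&oauth2.Token{AccessToken: tok})
}

func main() {
	fmt.Println("token source configured:", tokenSource() != nil)
}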


@@ -1,10 +0,0 @@
Enhancement: New option --repository-file
We've added a new command-line option --repository-file as an alternative
to -r. This allows reading the repository URL from a file in order to
prevent certain types of information leaks, especially for URLs containing
credentials.
https://github.com/restic/restic/issues/1458
https://github.com/restic/restic/issues/2900
https://github.com/restic/restic/pull/2910
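
A minimal sketch of the idea (hypothetical helper and path, not restic's
actual flag handling):

package main

import (
	"fmt"
	"io/ioutil"
	"strings"
)

// readRepositoryFile reads the repository URL from a file so that
// credentials embedded in the URL never show up on the command line
// or in the process list.
func readRepositoryFile(path string) (string, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	repo, err := readRepositoryFile("/etc/restic/repo") // hypothetical path
	if err == nil {
		fmt.Println(repo)
	}
}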


@@ -1,8 +0,0 @@
Enhancement: Warn if parent snapshot cannot be loaded during backup
During a backup restic uses the parent snapshot to check whether a file was
changed and has to be backed up again. For this check the backup has to read
the directories contained in the old snapshot. If a tree blob cannot be
loaded, restic now warns about this problem with the backup repository.
https://github.com/restic/restic/pull/2978


@@ -11,7 +11,6 @@ import (
"path"
"path/filepath"
"runtime"
"strconv"
"strings"
"time"
@@ -56,14 +55,6 @@ Exit status is 3 if some source data could not be read (incomplete snapshot crea
},
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
if backupOptions.Stdin {
for _, filename := range backupOptions.FilesFrom {
if filename == "-" {
return errors.Fatal("cannot use both `--stdin` and `--files-from -`")
}
}
}
var t tomb.Tomb
term := termstatus.New(globalOptions.stdout, globalOptions.stderr, globalOptions.Quiet)
t.Go(func() error { term.Run(t.Context(globalOptions.ctx)); return nil })
@@ -91,12 +82,15 @@ type BackupOptions struct {
ExcludeLargerThan string
Stdin bool
StdinFilename string
Tags []string
Tags restic.TagLists
Host string
FilesFrom []string
FilesFromVerbatim []string
FilesFromRaw []string
TimeStamp string
WithAtime bool
IgnoreInode bool
IgnoreCtime bool
UseFsSnapshot bool
}
@@ -121,16 +115,23 @@ func init() {
f.StringVar(&backupOptions.ExcludeLargerThan, "exclude-larger-than", "", "max `size` of the files to be backed up (allowed suffixes: k/K, m/M, g/G, t/T)")
f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "`filename` to use when reading from stdin")
f.StringArrayVar(&backupOptions.Tags, "tag", nil, "add a `tag` for the new snapshot (can be specified multiple times)")
f.Var(&backupOptions.Tags, "tag", "add `tags` for the new snapshot in the format `tag[,tag,...]` (can be specified multiple times)")
f.StringVarP(&backupOptions.Host, "host", "H", "", "set the `hostname` for the snapshot manually. To prevent an expensive rescan use the \"parent\" flag")
f.StringVar(&backupOptions.Host, "hostname", "", "set the `hostname` for the snapshot manually")
f.MarkDeprecated("hostname", "use --host")
err := f.MarkDeprecated("hostname", "use --host")
if err != nil {
// MarkDeprecated only returns an error when the flag could not be found
panic(err)
}
f.StringArrayVar(&backupOptions.FilesFrom, "files-from", nil, "read the files to backup from `file` (can be combined with file args/can be specified multiple times)")
f.StringArrayVar(&backupOptions.FilesFrom, "files-from", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&backupOptions.FilesFromVerbatim, "files-from-verbatim", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringArrayVar(&backupOptions.FilesFromRaw, "files-from-raw", nil, "read the files to backup from `file` (can be combined with file args; can be specified multiple times)")
f.StringVar(&backupOptions.TimeStamp, "time", "", "`time` of the backup (ex. '2012-11-01 22:08:41') (default: now)")
f.BoolVar(&backupOptions.WithAtime, "with-atime", false, "store the atime for all files and directories")
f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number changes when checking for modified files")
f.BoolVar(&backupOptions.IgnoreCtime, "ignore-ctime", false, "ignore ctime changes when checking for modified files")
if runtime.GOOS == "windows" {
f.BoolVar(&backupOptions.UseFsSnapshot, "use-fs-snapshot", false, "use filesystem snapshot where possible (currently only Windows VSS)")
}
@@ -156,11 +157,13 @@ func filterExisting(items []string) (result []string, err error) {
return
}
// readFromFile will read all lines from the given filename and return them as
// a string array. If filename is empty, readFromFile returns an empty string
// array. If filename is a dash (-), readFromFile will read the lines from the
// readLines reads all lines from the named file and returns them as a
// string slice.
//
// If filename is empty, readLines returns an empty slice.
// If filename is a dash (-), readLines will read the lines from the
// standard input.
func readLinesFromFile(filename string) ([]string, error) {
func readLines(filename string) ([]string, error) {
if filename == "" {
return nil, nil
}
@@ -184,29 +187,72 @@ func readLinesFromFile(filename string) ([]string, error) {
scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
// ignore empty lines
if line == "" {
continue
}
// strip comments
if strings.HasPrefix(line, "#") {
continue
}
lines = append(lines, line)
lines = append(lines, scanner.Text())
}
if err := scanner.Err(); err != nil {
return nil, err
}
return lines, nil
}
// readFilenamesFromFileRaw reads a list of filenames from the given file,
// or stdin if filename is "-". Each filename is terminated by a zero byte,
// which is stripped off.
func readFilenamesFromFileRaw(filename string) (names []string, err error) {
f := os.Stdin
if filename != "-" {
if f, err = os.Open(filename); err != nil {
return nil, err
}
}
names, err = readFilenamesRaw(f)
if err != nil {
// ignore subsequent errors
_ = f.Close()
return nil, err
}
err = f.Close()
if err != nil {
return nil, err
}
return names, nil
}
func readFilenamesRaw(r io.Reader) (names []string, err error) {
br := bufio.NewReader(r)
for {
name, err := br.ReadString(0)
switch err {
case nil:
case io.EOF:
if name == "" {
return names, nil
}
return nil, errors.Fatal("--files-from-raw: trailing zero byte missing")
default:
return nil, err
}
name = name[:len(name)-1]
if name == "" {
// The empty filename is never valid. Handle this now to
// prevent downstream code from erroneously backing up
// filepath.Clean("") == ".".
return nil, errors.Fatal("--files-from-raw: empty filename in listing")
}
names = append(names, name)
}
}
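// For illustration: input in this format is typically produced with
// find ... -print0 and passed to restic via --files-from-raw.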
// Check returns an error when an invalid combination of options was set.
func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
if gopts.password == "" {
for _, filename := range opts.FilesFrom {
filesFrom := append(append(opts.FilesFrom, opts.FilesFromVerbatim...), opts.FilesFromRaw...)
for _, filename := range filesFrom {
if filename == "-" {
return errors.Fatal("unable to read password from stdin when data is to be read from stdin, use --password-file or $RESTIC_PASSWORD")
}
@@ -217,6 +263,12 @@ func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
if len(opts.FilesFrom) > 0 {
return errors.Fatal("--stdin and --files-from cannot be used together")
}
if len(opts.FilesFromVerbatim) > 0 {
return errors.Fatal("--stdin and --files-from-verbatim cannot be used together")
}
if len(opts.FilesFromRaw) > 0 {
return errors.Fatal("--stdin and --files-from-raw cannot be used together")
}
if len(args) > 0 {
return errors.Fatal("--stdin was specified and files/dirs were listed as arguments")
@@ -356,15 +408,19 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
return nil, nil
}
var lines []string
for _, file := range opts.FilesFrom {
fromfile, err := readLinesFromFile(file)
fromfile, err := readLines(file)
if err != nil {
return nil, err
}
// expand wildcards
for _, line := range fromfile {
line = strings.TrimSpace(line)
if line == "" || line[0] == '#' { // '#' marks a comment.
continue
}
var expanded []string
expanded, err := filepath.Glob(line)
if err != nil {
@@ -373,19 +429,38 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
if len(expanded) == 0 {
Warnf("pattern %q does not match any files, skipping\n", line)
}
lines = append(lines, expanded...)
targets = append(targets, expanded...)
}
}
// merge files from files-from into normal args so we can reuse the normal
// args checks and have the ability to use both files-from and args at the
// same time
args = append(args, lines...)
if len(args) == 0 && !opts.Stdin {
for _, file := range opts.FilesFromVerbatim {
fromfile, err := readLines(file)
if err != nil {
return nil, err
}
for _, line := range fromfile {
if line == "" {
continue
}
targets = append(targets, line)
}
}
for _, file := range opts.FilesFromRaw {
fromfile, err := readFilenamesFromFileRaw(file)
if err != nil {
return nil, err
}
targets = append(targets, fromfile...)
}
// Merge args into files-from so we can reuse the normal args checks
// and have the ability to use both files-from and args at the same time.
targets = append(targets, args...)
if len(targets) == 0 && !opts.Stdin {
return nil, errors.Fatal("nothing to backup, please specify target files/dirs")
}
targets = args
targets, err = filterExisting(targets)
if err != nil {
return nil, err
@@ -486,15 +561,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
}()
gopts.stdout, gopts.stderr = p.Stdout(), p.Stderr()
if s, ok := os.LookupEnv("RESTIC_PROGRESS_FPS"); ok {
fps, err := strconv.Atoi(s)
if err == nil && fps >= 1 {
if fps > 60 {
fps = 60
}
p.SetMinUpdatePause(time.Second / time.Duration(fps))
}
}
p.SetMinUpdatePause(calculateProgressInterval())
t.Go(func() error { return p.Run(t.Context(gopts.ctx)) })
@@ -532,8 +599,12 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
return err
}
if !gopts.JSON && parentSnapshotID != nil {
p.V("using parent snapshot %v\n", parentSnapshotID.Str())
if !gopts.JSON {
if parentSnapshotID != nil {
p.P("using parent snapshot %v\n", parentSnapshotID.Str())
} else {
p.P("no parent snapshot found, will read all files\n")
}
}
selectByNameFilter := func(item string) bool {
@@ -611,7 +682,15 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
arch.CompleteItem = p.CompleteItem
arch.StartFile = p.StartFile
arch.CompleteBlob = p.CompleteBlob
arch.IgnoreInode = opts.IgnoreInode
if opts.IgnoreInode {
// --ignore-inode implies --ignore-ctime: on FUSE, the ctime is not
// reliable either.
arch.ChangeIgnoreFlags |= archiver.ChangeIgnoreCtime | archiver.ChangeIgnoreInode
}
if opts.IgnoreCtime {
arch.ChangeIgnoreFlags |= archiver.ChangeIgnoreCtime
}
if parentSnapshotID == nil {
parentSnapshotID = &restic.ID{}
@@ -619,7 +698,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
snapshotOpts := archiver.SnapshotOptions{
Excludes: opts.Excludes,
Tags: opts.Tags,
Tags: opts.Tags.Flatten(),
Time: timeStamp,
Hostname: opts.Host,
ParentSnapshot: *parentSnapshotID,


@@ -0,0 +1,113 @@
package main
import (
"bytes"
"fmt"
"os"
"path/filepath"
"runtime"
"sort"
"strings"
"testing"
rtest "github.com/restic/restic/internal/test"
)
func TestCollectTargets(t *testing.T) {
dir, cleanup := rtest.TempDir(t)
defer cleanup()
fooSpace := "foo "
barStar := "bar*" // Must sort before the others, below.
if runtime.GOOS == "windows" { // Doesn't allow "*" or trailing space.
fooSpace = "foo"
barStar = "bar"
}
var expect []string
for _, filename := range []string{
barStar, "baz", "cmdline arg", fooSpace,
"fromfile", "fromfile-raw", "fromfile-verbatim", "quux",
} {
// All mentioned files must exist for collectTargets.
f, err := os.Create(filepath.Join(dir, filename))
rtest.OK(t, err)
rtest.OK(t, f.Close())
expect = append(expect, f.Name())
}
f1, err := os.Create(filepath.Join(dir, "fromfile"))
rtest.OK(t, err)
// Empty lines should be ignored. A line starting with '#' is a comment.
fmt.Fprintf(f1, "\n%s*\n # here's a comment\n", f1.Name())
rtest.OK(t, f1.Close())
f2, err := os.Create(filepath.Join(dir, "fromfile-verbatim"))
rtest.OK(t, err)
for _, filename := range []string{fooSpace, barStar} {
// Empty lines should be ignored. CR+LF is allowed.
fmt.Fprintf(f2, "%s\r\n\n", filepath.Join(dir, filename))
}
rtest.OK(t, f2.Close())
f3, err := os.Create(filepath.Join(dir, "fromfile-raw"))
rtest.OK(t, err)
for _, filename := range []string{"baz", "quux"} {
fmt.Fprintf(f3, "%s\x00", filepath.Join(dir, filename))
}
rtest.OK(t, err)
rtest.OK(t, f3.Close())
opts := BackupOptions{
FilesFrom: []string{f1.Name()},
FilesFromVerbatim: []string{f2.Name()},
FilesFromRaw: []string{f3.Name()},
}
targets, err := collectTargets(opts, []string{filepath.Join(dir, "cmdline arg")})
rtest.OK(t, err)
sort.Strings(targets)
rtest.Equals(t, expect, targets)
}
func TestReadFilenamesRaw(t *testing.T) {
// These should all be returned exactly as-is.
expected := []string{
"\xef\xbb\xbf/utf-8-bom",
"/absolute",
"../.././relative",
"\t\t leading and trailing space \t\t",
"newline\nin filename",
"not UTF-8: \x80\xff/simple",
` / *[]* \ `,
}
var buf bytes.Buffer
for _, name := range expected {
buf.WriteString(name)
buf.WriteByte(0)
}
got, err := readFilenamesRaw(&buf)
rtest.OK(t, err)
rtest.Equals(t, expected, got)
// Empty input is ok.
got, err = readFilenamesRaw(strings.NewReader(""))
rtest.OK(t, err)
rtest.Equals(t, 0, len(got))
// An empty filename is an error.
_, err = readFilenamesRaw(strings.NewReader("foo\x00\x00"))
rtest.Assert(t, err != nil, "no error for empty filename")
rtest.Assert(t, strings.Contains(err.Error(), "empty filename"),
"wrong error message: %v", err.Error())
// No trailing NUL byte is an error, because it likely means we're
// reading a line-oriented text file (someone forgot -print0).
_, err = readFilenamesRaw(strings.NewReader("simple.txt"))
rtest.Assert(t, err != nil, "no error for missing trailing zero byte")
rtest.Assert(t, strings.Contains(err.Error(), "zero byte"),
"wrong error message: %v", err.Error())
}


@@ -148,7 +148,7 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
})
}
tab.Write(gopts.stdout)
_ = tab.Write(gopts.stdout)
Printf("%d cache dirs in %s\n", len(dirs), cachedir)
return nil


@@ -42,10 +42,13 @@ func runCat(gopts GlobalOptions, args []string) error {
return err
}
lock, err := lockRepo(gopts.ctx, repo)
defer unlockRepo(lock)
if err != nil {
return err
if !gopts.NoLock {
lock, err := lockRepo(gopts.ctx, repo)
if err != nil {
return err
}
defer unlockRepo(lock)
}
tpe := args[0]
@@ -165,7 +168,8 @@ func runCat(gopts GlobalOptions, args []string) error {
case "blob":
for _, t := range []restic.BlobType{restic.DataBlob, restic.TreeBlob} {
if !repo.Index().Has(id, t) {
bh := restic.BlobHandle{ID: id, Type: t}
if !repo.Index().Has(bh) {
continue
}


@@ -1,10 +1,11 @@
package main
import (
"fmt"
"io/ioutil"
"math/rand"
"strconv"
"strings"
"time"
"github.com/spf13/cobra"
@@ -53,25 +54,38 @@ func init() {
f := cmdCheck.Flags()
f.BoolVar(&checkOptions.ReadData, "read-data", false, "read all data blobs")
f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read subset n of m data packs (format: `n/m`)")
f.StringVar(&checkOptions.ReadDataSubset, "read-data-subset", "", "read a `subset` of data packs, specified as 'n/t' for a specific subset or as 'x%' or 'x.y%' for a random subset")
f.BoolVar(&checkOptions.CheckUnused, "check-unused", false, "find unused blobs")
f.BoolVar(&checkOptions.WithCache, "with-cache", false, "use the cache")
}
func checkFlags(opts CheckOptions) error {
if opts.ReadData && opts.ReadDataSubset != "" {
return errors.Fatalf("check flags --read-data and --read-data-subset cannot be used together")
return errors.Fatal("check flags --read-data and --read-data-subset cannot be used together")
}
if opts.ReadDataSubset != "" {
dataSubset, err := stringToIntSlice(opts.ReadDataSubset)
if err != nil || len(dataSubset) != 2 {
return errors.Fatalf("check flag --read-data-subset must have two positive integer values, e.g. --read-data-subset=1/2")
}
if dataSubset[0] == 0 || dataSubset[1] == 0 || dataSubset[0] > dataSubset[1] {
return errors.Fatalf("check flag --read-data-subset=n/t values must be positive integers, and n <= t, e.g. --read-data-subset=1/2")
}
if dataSubset[1] > totalBucketsMax {
return errors.Fatalf("check flag --read-data-subset=n/t t must be at most %d", totalBucketsMax)
argumentError := errors.Fatal("check flag --read-data-subset must have two positive integer values or a percentage, e.g. --read-data-subset=1/2 or --read-data-subset=2.5%")
if err == nil {
if len(dataSubset) != 2 {
return argumentError
}
if dataSubset[0] == 0 || dataSubset[1] == 0 || dataSubset[0] > dataSubset[1] {
return errors.Fatal("check flag --read-data-subset=n/t values must be positive integers, and n <= t, e.g. --read-data-subset=1/2")
}
if dataSubset[1] > totalBucketsMax {
return errors.Fatalf("check flag --read-data-subset=n/t t must be at most %d", totalBucketsMax)
}
} else {
percentage, err := parsePercentage(opts.ReadDataSubset)
if err != nil {
return argumentError
}
if percentage <= 0.0 || percentage > 100.0 {
return errors.Fatal(
"check flag --read-data-subset=n% n must be above 0.0% and at most 100.0%")
}
}
}
@@ -98,6 +112,21 @@ func stringToIntSlice(param string) (split []uint, err error) {
return result, nil
}
// parsePercentage parses a percentage string of the form "X%" where X is a float constant,
// and returns the value of that constant. It does not check the range of the value.
func parsePercentage(s string) (float64, error) {
if !strings.HasSuffix(s, "%") {
return 0, errors.Errorf(`parsePercentage: %q does not end in "%%"`, s)
}
s = s[:len(s)-1]
p, err := strconv.ParseFloat(s, 64)
if err != nil {
return 0, errors.Errorf("parsePercentage: %v", err)
}
return p, nil
}
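// Illustrative example: parsePercentage("2.5%") returns 2.5 and a nil error;
// the range check (above 0% and at most 100%) happens in checkFlags above.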
// prepareCheckCache configures a special cache directory for check.
//
// * if --with-cache is specified, the default cache is used
@@ -165,7 +194,7 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
}
}
chkr := checker.New(repo)
chkr := checker.New(repo, opts.CheckUnused)
Verbosef("load indexes\n")
hints, errs := chkr.LoadIndex(gopts.ctx)
@@ -212,7 +241,11 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
Verbosef("check snapshots, trees and blobs\n")
errChan = make(chan error)
go chkr.Structure(gopts.ctx, errChan)
go func() {
bar := newProgressMax(!gopts.Quiet, 0, "snapshots")
defer bar.Done()
chkr.Structure(gopts.ctx, bar, errChan)
}()
for err := range errChan {
errorsFound = true
@@ -227,29 +260,15 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
}
if opts.CheckUnused {
for _, id := range chkr.UnusedBlobs() {
for _, id := range chkr.UnusedBlobs(gopts.ctx) {
Verbosef("unused blob %v\n", id)
errorsFound = true
}
}
doReadData := func(bucket, totalBuckets uint) {
packs := restic.IDSet{}
for pack := range chkr.GetPacks() {
// If we ever check more than the first byte
// of pack, update totalBucketsMax.
if (uint(pack[0]) % totalBuckets) == (bucket - 1) {
packs.Insert(pack)
}
}
doReadData := func(packs map[restic.ID]int64) {
packCount := uint64(len(packs))
if packCount < chkr.CountPacks() {
Verbosef(fmt.Sprintf("read group #%d of %d data packs (out of total %d packs in %d groups)\n", bucket, packCount, chkr.CountPacks(), totalBuckets))
} else {
Verbosef("read all data\n")
}
p := newProgressMax(!gopts.Quiet, packCount, "packs")
errChan := make(chan error)
@@ -259,14 +278,31 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
errorsFound = true
Warnf("%v\n", err)
}
p.Done()
}
switch {
case opts.ReadData:
doReadData(1, 1)
Verbosef("read all data\n")
doReadData(selectPacksByBucket(chkr.GetPacks(), 1, 1))
case opts.ReadDataSubset != "":
dataSubset, _ := stringToIntSlice(opts.ReadDataSubset)
doReadData(dataSubset[0], dataSubset[1])
var packs map[restic.ID]int64
dataSubset, err := stringToIntSlice(opts.ReadDataSubset)
if err == nil {
bucket := dataSubset[0]
totalBuckets := dataSubset[1]
packs = selectPacksByBucket(chkr.GetPacks(), bucket, totalBuckets)
packCount := uint64(len(packs))
Verbosef("read group #%d of %d data packs (out of total %d packs in %d groups)\n", bucket, packCount, chkr.CountPacks(), totalBuckets)
} else {
percentage, _ := parsePercentage(opts.ReadDataSubset)
packs = selectRandomPacksByPercentage(chkr.GetPacks(), percentage)
Verbosef("read %.1f%% of data packs\n", percentage)
}
if packs == nil {
return errors.Fatal("internal error: failed to select packs to check")
}
doReadData(packs)
}
if errorsFound {
@@ -277,3 +313,42 @@ func runCheck(opts CheckOptions, gopts GlobalOptions, args []string) error {
return nil
}
// selectPacksByBucket selects subsets of packs by ranges of buckets.
func selectPacksByBucket(allPacks map[restic.ID]int64, bucket, totalBuckets uint) map[restic.ID]int64 {
packs := make(map[restic.ID]int64)
for pack, size := range allPacks {
// If we ever check more than the first byte
// of pack, update totalBucketsMax.
if (uint(pack[0]) % totalBuckets) == (bucket - 1) {
packs[pack] = size
}
}
return packs
}
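// Illustrative example: with totalBuckets = 5, a pack whose ID starts with
// byte 0x07 satisfies 7 % 5 == 2 and therefore lands in bucket 3.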
// selectRandomPacksByPercentage randomly chooses the given percentage of packs.
func selectRandomPacksByPercentage(allPacks map[restic.ID]int64, percentage float64) map[restic.ID]int64 {
packCount := len(allPacks)
packsToCheck := int(float64(packCount) * (percentage / 100.0))
if packsToCheck < 1 {
packsToCheck = 1
}
timeNs := time.Now().UnixNano()
r := rand.New(rand.NewSource(timeNs))
idx := r.Perm(packCount)
var keys []restic.ID
for k := range allPacks {
keys = append(keys, k)
}
packs := make(map[restic.ID]int64)
for i := 0; i < packsToCheck; i++ {
id := keys[idx[i]]
packs[id] = allPacks[id]
}
return packs
}


@@ -0,0 +1,124 @@
package main
import (
"math"
"reflect"
"testing"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
func TestParsePercentage(t *testing.T) {
testCases := []struct {
input string
output float64
expectError bool
}{
{"0%", 0.0, false},
{"1%", 1.0, false},
{"100%", 100.0, false},
{"123%", 123.0, false},
{"123.456%", 123.456, false},
{"0.742%", 0.742, false},
{"-100%", -100.0, false},
{" 1%", 0.0, true},
{"1 %", 0.0, true},
{"1% ", 0.0, true},
}
for _, testCase := range testCases {
output, err := parsePercentage(testCase.input)
if testCase.expectError {
rtest.Assert(t, err != nil, "Expected error for case %s", testCase.input)
rtest.Assert(t, output == 0.0, "Expected output to be 0.0, got %v", output)
} else {
rtest.Assert(t, err == nil, "Expected no error for case %s", testCase.input)
rtest.Assert(t, math.Abs(testCase.output-output) < 0.00001, "Expected %f, got %f",
testCase.output, output)
}
}
}
func TestStringToIntSlice(t *testing.T) {
testCases := []struct {
input string
output []uint
expectError bool
}{
{"3/5", []uint{3, 5}, false},
{"1/100", []uint{1, 100}, false},
{"abc", nil, true},
{"1/a", nil, true},
{"/", nil, true},
}
for _, testCase := range testCases {
output, err := stringToIntSlice(testCase.input)
if testCase.expectError {
rtest.Assert(t, err != nil, "Expected error for case %s", testCase.input)
rtest.Assert(t, output == nil, "Expected output to be nil, got %v", output)
} else {
rtest.Assert(t, err == nil, "Expected no error for case %s", testCase.input)
rtest.Assert(t, len(output) == 2, "Invalid output length for case %s", testCase.input)
rtest.Assert(t, reflect.DeepEqual(output, testCase.output), "Expected %v, got %v",
testCase.output, output)
}
}
}
func TestSelectPacksByBucket(t *testing.T) {
var testPacks = make(map[restic.ID]int64)
for i := 1; i <= 10; i++ {
id := restic.NewRandomID()
// ensure the relevant part of the generated id is reproducible
id[0] = byte(i)
testPacks[id] = 0
}
selectedPacks := selectPacksByBucket(testPacks, 0, 10)
rtest.Assert(t, len(selectedPacks) == 0, "Expected 0 selected packs")
for i := uint(1); i <= 5; i++ {
selectedPacks = selectPacksByBucket(testPacks, i, 5)
rtest.Assert(t, len(selectedPacks) == 2, "Expected 2 selected packs")
}
selectedPacks = selectPacksByBucket(testPacks, 1, 1)
rtest.Assert(t, len(selectedPacks) == 10, "Expected 10 selected packs")
for testPack := range testPacks {
_, ok := selectedPacks[testPack]
rtest.Assert(t, ok, "Expected input and output to be equal")
}
}
func TestSelectRandomPacksByPercentage(t *testing.T) {
var testPacks = make(map[restic.ID]int64)
for i := 1; i <= 10; i++ {
testPacks[restic.NewRandomID()] = 0
}
selectedPacks := selectRandomPacksByPercentage(testPacks, 0.0)
rtest.Assert(t, len(selectedPacks) == 1, "Expected 1 selected pack")
selectedPacks = selectRandomPacksByPercentage(testPacks, 10.0)
rtest.Assert(t, len(selectedPacks) == 1, "Expected 1 selected pack")
for pack := range selectedPacks {
_, ok := testPacks[pack]
rtest.Assert(t, ok, "Unexpected selection")
}
selectedPacks = selectRandomPacksByPercentage(testPacks, 50.0)
rtest.Assert(t, len(selectedPacks) == 5, "Expected 5 selected packs")
for pack := range selectedPacks {
_, ok := testPacks[pack]
rtest.Assert(t, ok, "Unexpected item in selection")
}
selectedPacks = selectRandomPacksByPercentage(testPacks, 100.0)
rtest.Assert(t, len(selectedPacks) == 10, "Expected 10 selected packs")
for testPack := range testPacks {
_, ok := selectedPacks[testPack]
rtest.Assert(t, ok, "Expected input and output to be equal")
}
}


@@ -6,6 +6,7 @@ import (
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/restic"
"golang.org/x/sync/errgroup"
"github.com/spf13/cobra"
)
@@ -14,12 +15,19 @@ var cmdCopy = &cobra.Command{
Use: "copy [flags] [snapshotID ...]",
Short: "Copy snapshots from one repository to another",
Long: `
The "copy" command copies one or more snapshots from one repository to another
repository. Note that this will have to read (download) and write (upload) the
entire snapshot(s) due to the different encryption keys on the source and
destination, and that transferred files are not re-chunked, which may break
their deduplication. This can be mitigated by the "--copy-chunker-params"
option when initializing a new destination repository using the "init" command.
The "copy" command copies one or more snapshots from one repository to another.
NOTE: This process will have to both download (read) and upload (write) the
entire snapshot(s) due to the different encryption keys used in the source and
destination repositories. This /may incur higher bandwidth usage and costs/ than
expected during normal backup runs.
NOTE: The copying process does not re-chunk files, which may break deduplication
between the files copied and files already stored in the destination repository.
This means that copied files, which existed in both the source and destination
repository, /may occupy up to twice their space/ in the destination repository.
This can be mitigated by the "--copy-chunker-params" option when initializing a
new destination repository using the "init" command.
`,
RunE: func(cmd *cobra.Command, args []string) error {
return runCopy(copyOptions, globalOptions, args)
@@ -96,12 +104,8 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
dstSnapshotByOriginal[*sn.ID()] = append(dstSnapshotByOriginal[*sn.ID()], sn)
}
cloner := &treeCloner{
srcRepo: srcRepo,
dstRepo: dstRepo,
visitedTrees: restic.NewIDSet(),
buf: nil,
}
// remember already processed trees across all snapshots
visitedTrees := restic.NewIDSet()
for sn := range FindFilteredSnapshots(ctx, srcRepo, opts.Hosts, opts.Tags, opts.Paths, args) {
Verbosef("\nsnapshot %s of %v at %s)\n", sn.ID().Str(), sn.Paths, sn.Time)
@@ -126,7 +130,7 @@ func runCopy(opts CopyOptions, gopts GlobalOptions, args []string) error {
}
Verbosef(" copy started, this may take a while...\n")
if err := cloner.copyTree(ctx, *sn.Tree); err != nil {
if err := copyTree(ctx, srcRepo, dstRepo, visitedTrees, *sn.Tree); err != nil {
return err
}
debug.Log("tree copied")
@@ -170,64 +174,64 @@ func similarSnapshots(sna *restic.Snapshot, snb *restic.Snapshot) bool {
return true
}
type treeCloner struct {
srcRepo restic.Repository
dstRepo restic.Repository
visitedTrees restic.IDSet
buf []byte
}
func copyTree(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Repository,
visitedTrees restic.IDSet, rootTreeID restic.ID) error {
func (t *treeCloner) copyTree(ctx context.Context, treeID restic.ID) error {
// We have already processed this tree
if t.visitedTrees.Has(treeID) {
wg, ctx := errgroup.WithContext(ctx)
treeStream := restic.StreamTrees(ctx, wg, srcRepo, restic.IDs{rootTreeID}, func(treeID restic.ID) bool {
visited := visitedTrees.Has(treeID)
visitedTrees.Insert(treeID)
return visited
}, nil)
wg.Go(func() error {
// reused buffer
var buf []byte
for tree := range treeStream {
if tree.Error != nil {
return fmt.Errorf("LoadTree(%v) returned error %v", tree.ID.Str(), tree.Error)
}
// Do we already have this tree blob?
if !dstRepo.Index().Has(restic.BlobHandle{ID: tree.ID, Type: restic.TreeBlob}) {
newTreeID, err := dstRepo.SaveTree(ctx, tree.Tree)
if err != nil {
return fmt.Errorf("SaveTree(%v) returned error %v", tree.ID.Str(), err)
}
// Assurance only.
if newTreeID != tree.ID {
return fmt.Errorf("SaveTree(%v) returned unexpected id %s", tree.ID.Str(), newTreeID.Str())
}
}
// TODO: parallelize blob down/upload
for _, entry := range tree.Nodes {
// Recursion into directories is handled by StreamTrees
// Copy the blobs for this file.
for _, blobID := range entry.Content {
// Do we already have this data blob?
if dstRepo.Index().Has(restic.BlobHandle{ID: blobID, Type: restic.DataBlob}) {
continue
}
debug.Log("Copying blob %s\n", blobID.Str())
var err error
buf, err = srcRepo.LoadBlob(ctx, restic.DataBlob, blobID, buf)
if err != nil {
return fmt.Errorf("LoadBlob(%v) returned error %v", blobID, err)
}
_, _, err = dstRepo.SaveBlob(ctx, restic.DataBlob, buf, blobID, false)
if err != nil {
return fmt.Errorf("SaveBlob(%v) returned error %v", blobID, err)
}
}
}
}
return nil
}
tree, err := t.srcRepo.LoadTree(ctx, treeID)
if err != nil {
return fmt.Errorf("LoadTree(%v) returned error %v", treeID.Str(), err)
}
t.visitedTrees.Insert(treeID)
// Do we already have this tree blob?
if !t.dstRepo.Index().Has(treeID, restic.TreeBlob) {
newTreeID, err := t.dstRepo.SaveTree(ctx, tree)
if err != nil {
return fmt.Errorf("SaveTree(%v) returned error %v", treeID.Str(), err)
}
// Assurance only.
if newTreeID != treeID {
return fmt.Errorf("SaveTree(%v) returned unexpected id %s", treeID.Str(), newTreeID.Str())
}
}
// TODO: parallelize this stuff, likely only needed inside a tree.
for _, entry := range tree.Nodes {
// If it is a directory, recurse
if entry.Type == "dir" && entry.Subtree != nil {
if err := t.copyTree(ctx, *entry.Subtree); err != nil {
return err
}
}
// Copy the blobs for this file.
for _, blobID := range entry.Content {
// Do we already have this data blob?
if t.dstRepo.Index().Has(blobID, restic.DataBlob) {
continue
}
debug.Log("Copying blob %s\n", blobID.Str())
t.buf, err = t.srcRepo.LoadBlob(ctx, restic.DataBlob, blobID, t.buf)
if err != nil {
return fmt.Errorf("LoadBlob(%v) returned error %v", blobID, err)
}
_, _, err = t.dstRepo.SaveBlob(ctx, restic.DataBlob, t.buf, blobID, false)
if err != nil {
return fmt.Errorf("SaveBlob(%v) returned error %v", blobID, err)
}
}
}
return nil
})
return wg.Wait()
}


@@ -55,8 +55,7 @@ func prettyPrintJSON(wr io.Writer, item interface{}) error {
}
func debugPrintSnapshots(ctx context.Context, repo *repository.Repository, wr io.Writer) error {
return repo.List(ctx, restic.SnapshotFile, func(id restic.ID, size int64) error {
snapshot, err := restic.LoadSnapshot(ctx, repo, id)
return restic.ForAllSnapshots(ctx, repo, nil, func(id restic.ID, snapshot *restic.Snapshot, err error) error {
if err != nil {
return err
}
@@ -87,7 +86,7 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
return repo.List(ctx, restic.PackFile, func(id restic.ID, size int64) error {
h := restic.Handle{Type: restic.PackFile, Name: id.String()}
blobs, err := pack.List(repo.Key(), restic.ReaderAt(ctx, repo.Backend(), h), size)
blobs, _, err := pack.List(repo.Key(), restic.ReaderAt(ctx, repo.Backend(), h), size)
if err != nil {
Warnf("error for pack %v: %v\n", id.Str(), err)
return nil
@@ -111,10 +110,8 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
}
func dumpIndexes(ctx context.Context, repo restic.Repository, wr io.Writer) error {
return repo.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
return repository.ForAllIndexes(ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
Printf("index_id: %v\n", id)
idx, err := repository.LoadIndex(ctx, repo, id)
if err != nil {
return err
}


@@ -55,9 +55,8 @@ func init() {
func loadSnapshot(ctx context.Context, repo *repository.Repository, desc string) (*restic.Snapshot, error) {
id, err := restic.FindSnapshot(ctx, repo, desc)
if err != nil {
return nil, err
return nil, errors.Fatal(err.Error())
}
return restic.LoadSnapshot(ctx, repo, id)
}
@@ -365,6 +364,8 @@ func runDiff(opts DiffOptions, gopts GlobalOptions, args []string) error {
}
stats := NewDiffStats()
stats.BlobsBefore.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn1.Tree})
stats.BlobsAfter.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn2.Tree})
err = c.diffTree(ctx, stats, "/", *sn1.Tree, *sn2.Tree)
if err != nil {


@@ -21,8 +21,8 @@ var cmdDump = &cobra.Command{
Long: `
The "dump" command extracts files from a snapshot from the repository. If a
single file is selected, it prints its contents to stdout. Folders are output
as a tar file containing the contents of the specified folder. Pass "/" as
file name to dump the whole snapshot as a tar file.
as a tar (default) or zip file containing the contents of the specified folder.
Pass "/" as file name to dump the whole snapshot as an archive file.
The special snapshot "latest" can be used to use the latest snapshot in the
repository.
@@ -40,9 +40,10 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
// DumpOptions collects all options for the dump command.
type DumpOptions struct {
Hosts []string
Paths []string
Tags restic.TagLists
Hosts []string
Paths []string
Tags restic.TagLists
Archive string
}
var dumpOptions DumpOptions
@@ -54,6 +55,7 @@ func init() {
flags.StringArrayVarP(&dumpOptions.Hosts, "host", "H", nil, `only consider snapshots for this host when the snapshot ID is "latest" (can be specified multiple times)`)
flags.Var(&dumpOptions.Tags, "tag", "only consider snapshots which include this `taglist` for snapshot ID \"latest\"")
flags.StringArrayVar(&dumpOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` for snapshot ID \"latest\"")
flags.StringVarP(&dumpOptions.Archive, "archive", "a", "tar", "set archive `format` as \"tar\" or \"zip\"")
}
func splitPath(p string) []string {
@@ -65,8 +67,7 @@ func splitPath(p string) []string {
return append(s, f)
}
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string) error {
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string, writeDump dump.WriteDump) error {
if tree == nil {
return fmt.Errorf("called with a nil tree")
}
@@ -81,10 +82,10 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
// If we print / we need to assume that there are multiple nodes at that
// level in the tree.
if pathComponents[0] == "" {
if err := checkStdoutTar(); err != nil {
if err := checkStdoutArchive(); err != nil {
return err
}
return dump.WriteTar(ctx, repo, tree, "/", os.Stdout)
return writeDump(ctx, repo, tree, "/", os.Stdout)
}
item := filepath.Join(prefix, pathComponents[0])
@@ -100,16 +101,16 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
if err != nil {
return errors.Wrapf(err, "cannot load subtree for %q", item)
}
return printFromTree(ctx, subtree, repo, item, pathComponents[1:])
return printFromTree(ctx, subtree, repo, item, pathComponents[1:], writeDump)
case dump.IsDir(node):
if err := checkStdoutTar(); err != nil {
if err := checkStdoutArchive(); err != nil {
return err
}
subtree, err := repo.LoadTree(ctx, *node.Subtree)
if err != nil {
return err
}
return dump.WriteTar(ctx, repo, subtree, item, os.Stdout)
return writeDump(ctx, repo, subtree, item, os.Stdout)
case l > 1:
return fmt.Errorf("%q should be a dir, but is a %q", item, node.Type)
case !dump.IsFile(node):
@@ -127,6 +128,16 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
return errors.Fatal("no file and no snapshot ID specified")
}
var wd dump.WriteDump
switch opts.Archive {
case "tar":
wd = dump.WriteTar
case "zip":
wd = dump.WriteZip
default:
return fmt.Errorf("unknown archive format %q", opts.Archive)
}
snapshotIDString := args[0]
pathToPrint := args[1]
@@ -176,7 +187,7 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
Exitf(2, "loading tree for snapshot %q failed: %v", snapshotIDString, err)
}
err = printFromTree(ctx, tree, repo, "/", splittedPath)
err = printFromTree(ctx, tree, repo, "/", splittedPath, wd)
if err != nil {
Exitf(2, "cannot dump file: %v", err)
}
@@ -184,7 +195,7 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
return nil
}
func checkStdoutTar() error {
func checkStdoutArchive() error {
if stdoutIsTerminal() {
return fmt.Errorf("stdout is the terminal, please redirect output")
}


@@ -394,7 +394,6 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
delete(f.blobIDs, idStr[:shortStr])
}
f.out.PrintObject("blob", idStr, nodepath, parentTreeID.String(), sn)
break
}
}
@@ -465,7 +464,7 @@ func (f *Finder) findObjectPack(ctx context.Context, id string, t restic.BlobTyp
return
}
blobs := idx.Lookup(rid, t)
blobs := idx.Lookup(restic.BlobHandle{ID: rid, Type: t})
if len(blobs) == 0 {
Printf("Object %s not found in the index\n", rid.Str())
return
@@ -564,7 +563,10 @@ func runFind(opts FindOptions, gopts GlobalOptions, args []string) error {
}
if opts.PackID {
f.packsToBlobs(ctx, []string{f.pat.pattern[0]}) // TODO: support multiple packs
err := f.packsToBlobs(ctx, []string{f.pat.pattern[0]}) // TODO: support multiple packs
if err != nil {
return err
}
}
for sn := range FindFilteredSnapshots(ctx, repo, opts.Hosts, opts.Tags, opts.Paths, opts.Snapshots) {


@@ -68,7 +68,11 @@ func init() {
f.Var(&forgetOptions.KeepTags, "keep-tag", "keep snapshots with this `taglist` (can be specified multiple times)")
f.StringArrayVar(&forgetOptions.Hosts, "host", nil, "only consider snapshots with the given `host` (can be specified multiple times)")
f.StringArrayVar(&forgetOptions.Hosts, "hostname", nil, "only consider snapshots with the given `hostname` (can be specified multiple times)")
f.MarkDeprecated("hostname", "use --host")
err := f.MarkDeprecated("hostname", "use --host")
if err != nil {
// MarkDeprecated only returns an error when the flag is not found
panic(err)
}
f.Var(&forgetOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
@@ -80,9 +84,15 @@ func init() {
f.BoolVar(&forgetOptions.Prune, "prune", false, "automatically run the 'prune' command if snapshots have been removed")
f.SortFlags = false
addPruneOptions(cmdForget)
}
func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
err := verifyPruneOptions(&pruneOptions)
if err != nil {
return err
}
repo, err := OpenRepository(gopts)
if err != nil {
return err
@@ -204,8 +214,12 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
}
}
if len(removeSnIDs) > 0 && opts.Prune && !opts.DryRun {
return pruneRepository(gopts, repo)
if len(removeSnIDs) > 0 && opts.Prune {
if !gopts.JSON {
Verbosef("%d snapshots have been removed, running prune\n", len(removeSnIDs))
}
pruneOptions.DryRun = opts.DryRun
return runPruneWithRepo(pruneOptions, gopts, repo, removeSnIDs)
}
return nil


@@ -43,10 +43,6 @@ func init() {
}
func runInit(opts InitOptions, gopts GlobalOptions, args []string) error {
if gopts.Repo == "" {
return errors.Fatal("Please specify repository location (-r)")
}
chunkerPolynomial, err := maybeReadChunkerPolynomial(opts, gopts)
if err != nil {
return err


@@ -60,8 +60,7 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
case "locks":
t = restic.LockFile
case "blobs":
return repo.List(opts.ctx, restic.IndexFile, func(id restic.ID, size int64) error {
idx, err := repository.LoadIndex(opts.ctx, repo, id)
return repository.ForAllIndexes(opts.ctx, repo, func(id restic.ID, idx *repository.Index, oldFormat bool, err error) error {
if err != nil {
return err
}
@@ -70,7 +69,6 @@ func runList(cmd *cobra.Command, opts GlobalOptions, args []string) error {
}
return nil
})
default:
return errors.Fatal("invalid type")
}


@@ -159,16 +159,19 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
enc := json.NewEncoder(gopts.stdout)
printSnapshot = func(sn *restic.Snapshot) {
enc.Encode(lsSnapshot{
err = enc.Encode(lsSnapshot{
Snapshot: sn,
ID: sn.ID(),
ShortID: sn.ID().Str(),
StructType: "snapshot",
})
if err != nil {
Warnf("JSON encode failed: %v\n", err)
}
}
printNode = func(path string, node *restic.Node) {
enc.Encode(lsNode{
err = enc.Encode(lsNode{
Name: node.Name,
Type: node.Type,
Path: path,
@@ -181,6 +184,9 @@ func runLs(opts LsOptions, gopts GlobalOptions, args []string) error {
ChangeTime: node.ChangeTime,
StructType: "node",
})
if err != nil {
Warnf("JSON encode failed: %v\n", err)
}
}
} else {
printSnapshot = func(sn *restic.Snapshot) {


@@ -116,11 +116,8 @@ func runMount(opts MountOptions, gopts GlobalOptions, args []string) error {
mountpoint := args[0]
if _, err := resticfs.Stat(mountpoint); os.IsNotExist(errors.Cause(err)) {
Verbosef("Mountpoint %s doesn't exist, creating it\n", mountpoint)
err = resticfs.Mkdir(mountpoint, os.ModeDir|0700)
if err != nil {
return err
}
Verbosef("Mountpoint %s doesn't exist\n", mountpoint)
return err
}
mountOptions := []systemFuse.MountOption{
systemFuse.ReadOnly(),


@@ -23,8 +23,14 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
fmt.Printf("All Extended Options:\n")
var maxLen int
for _, opt := range options.List() {
fmt.Printf(" %-15s %s\n", opt.Namespace+"."+opt.Name, opt.Text)
if l := len(opt.Namespace + "." + opt.Name); l > maxLen {
maxLen = l
}
}
for _, opt := range options.List() {
fmt.Printf(" %*s %s\n", -maxLen, opt.Namespace+"."+opt.Name, opt.Text)
}
},
}


@@ -1,15 +1,23 @@
package main
import (
"math"
"sort"
"strconv"
"strings"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/spf13/cobra"
)
var errorIndexIncomplete = errors.Fatal("index is not complete")
var errorPacksMissing = errors.Fatal("packs from index missing in repo")
var errorSizeNotMatching = errors.Fatal("pack size does not match calculated size from index")
var cmdPrune = &cobra.Command{
Use: "prune [flags]",
Short: "Remove unneeded data from the repository",
@@ -24,12 +32,91 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runPrune(globalOptions)
return runPrune(pruneOptions, globalOptions)
},
}
// PruneOptions collects all options for the cleanup command.
type PruneOptions struct {
DryRun bool
MaxUnused string
maxUnusedBytes func(used uint64) (unused uint64) // calculates the number of unused bytes after repacking, according to MaxUnused
MaxRepackSize string
MaxRepackBytes uint64
RepackCachableOnly bool
}
var pruneOptions PruneOptions
func init() {
cmdRoot.AddCommand(cmdPrune)
f := cmdPrune.Flags()
f.BoolVarP(&pruneOptions.DryRun, "dry-run", "n", false, "do not modify the repository, just print what would be done")
addPruneOptions(cmdPrune)
}
func addPruneOptions(c *cobra.Command) {
f := c.Flags()
f.StringVar(&pruneOptions.MaxUnused, "max-unused", "5%", "tolerate given `limit` of unused data (absolute value in bytes with suffixes k/K, m/M, g/G, t/T, a value in % or the word 'unlimited')")
f.StringVar(&pruneOptions.MaxRepackSize, "max-repack-size", "", "maximum `size` to repack (allowed suffixes: k/K, m/M, g/G, t/T)")
f.BoolVar(&pruneOptions.RepackCachableOnly, "repack-cacheable-only", false, "only repack packs which are cacheable")
}
func verifyPruneOptions(opts *PruneOptions) error {
if len(opts.MaxRepackSize) > 0 {
size, err := parseSizeStr(opts.MaxRepackSize)
if err != nil {
return err
}
opts.MaxRepackBytes = uint64(size)
}
maxUnused := strings.TrimSpace(opts.MaxUnused)
if maxUnused == "" {
return errors.Fatalf("invalid value for --max-unused: %q", opts.MaxUnused)
}
// parse MaxUnused either as unlimited, a percentage, or an absolute number of bytes
switch {
case maxUnused == "unlimited":
opts.maxUnusedBytes = func(used uint64) uint64 {
return math.MaxUint64
}
case strings.HasSuffix(maxUnused, "%"):
maxUnused = strings.TrimSuffix(maxUnused, "%")
p, err := strconv.ParseFloat(maxUnused, 64)
if err != nil {
return errors.Fatalf("invalid percentage %q passed for --max-unused: %v", opts.MaxUnused, err)
}
if p < 0 {
return errors.Fatal("percentage for --max-unused must be positive")
}
if p >= 100 {
return errors.Fatal("percentage for --max-unused must be below 100%")
}
opts.maxUnusedBytes = func(used uint64) uint64 {
return uint64(p / (100 - p) * float64(used))
}
default:
size, err := parseSizeStr(maxUnused)
if err != nil {
return errors.Fatalf("invalid number of bytes %q for --max-unused: %v", opts.MaxUnused, err)
}
opts.maxUnusedBytes = func(used uint64) uint64 {
return uint64(size)
}
}
return nil
}
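// A quick sanity check of the percentage branch above, with made-up numbers:
// limit = p/(100-p) * used solves unused/(used+unused) <= p/100 for unused.
// With used = 95 GiB and --max-unused 5%, the limit is 5/95 * 95 GiB = 5 GiB,
// so the repository may grow to 100 GiB, of which exactly 5% is unused.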
func shortenStatus(maxLength int, s string) string {
@@ -44,7 +131,12 @@ func shortenStatus(maxLength int, s string) string {
return s[:maxLength-3] + "..."
}
func runPrune(gopts GlobalOptions) error {
func runPrune(opts PruneOptions, gopts GlobalOptions) error {
err := verifyPruneOptions(&opts)
if err != nil {
return err
}
repo, err := OpenRepository(gopts)
if err != nil {
return err
@@ -56,203 +148,398 @@ func runPrune(gopts GlobalOptions) error {
return err
}
return runPruneWithRepo(opts, gopts, repo, restic.NewIDSet())
}
func runPruneWithRepo(opts PruneOptions, gopts GlobalOptions, repo *repository.Repository, ignoreSnapshots restic.IDSet) error {
// we do not need index updates while pruning!
repo.DisableAutoIndexUpdate()
return pruneRepository(gopts, repo)
}
func mixedBlobs(list []restic.Blob) bool {
var tree, data bool
for _, pb := range list {
switch pb.Type {
case restic.TreeBlob:
tree = true
case restic.DataBlob:
data = true
}
if tree && data {
return true
}
if repo.Cache == nil {
Print("warning: running prune without a cache, this may be very slow!\n")
}
return false
Verbosef("loading indexes...\n")
err := repo.LoadIndex(gopts.ctx)
if err != nil {
return err
}
usedBlobs, err := getUsedBlobs(gopts, repo, ignoreSnapshots)
if err != nil {
return err
}
return prune(opts, gopts, repo, usedBlobs)
}
func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
type packInfo struct {
usedBlobs uint
unusedBlobs uint
duplicateBlobs uint
usedSize uint64
unusedSize uint64
tpe restic.BlobType
}
type packInfoWithID struct {
ID restic.ID
packInfo
}
// prune selects which files to rewrite and then does that. The map usedBlobs is
// modified in the process.
func prune(opts PruneOptions, gopts GlobalOptions, repo restic.Repository, usedBlobs restic.BlobSet) error {
ctx := gopts.ctx
err := repo.LoadIndex(ctx)
if err != nil {
return err
}
var stats struct {
blobs int
packs int
snapshots int
bytes int64
blobs struct {
used uint
duplicate uint
unused uint
remove uint
repack uint
repackrm uint
}
size struct {
used uint64
duplicate uint64
unused uint64
remove uint64
repack uint64
repackrm uint64
unref uint64
}
packs struct {
used uint
unused uint
partlyUsed uint
keep uint
}
}
Verbosef("counting files in repo\n")
err = repo.List(ctx, restic.PackFile, func(restic.ID, int64) error {
stats.packs++
Verbosef("searching used packs...\n")
keepBlobs := restic.NewBlobSet()
duplicateBlobs := restic.NewBlobSet()
// iterate over all blobs in index to find out which blobs are duplicates
for blob := range repo.Index().Each(ctx) {
bh := blob.BlobHandle
size := uint64(blob.Length)
switch {
case usedBlobs.Has(bh): // used blob, move to keepBlobs
usedBlobs.Delete(bh)
keepBlobs.Insert(bh)
stats.size.used += size
stats.blobs.used++
case keepBlobs.Has(bh): // duplicate blob
duplicateBlobs.Insert(bh)
stats.size.duplicate += size
stats.blobs.duplicate++
default:
stats.size.unused += size
stats.blobs.unused++
}
}
// Check if all used blobs have been found in index
if len(usedBlobs) != 0 {
Warnf("%v not found in the index\n\n"+
"Integrity check failed: Data seems to be missing.\n"+
"Will not start prune to prevent (additional) data loss!\n"+
"Please report this error (along with the output of the 'prune' run) at\n"+
"https://github.com/restic/restic/issues/new/choose", usedBlobs)
return errorIndexIncomplete
}
indexPack := make(map[restic.ID]packInfo)
// save computed pack header size
for pid, hdrSize := range repo.Index().PackSize(ctx, true) {
// initialize tpe with NumBlobTypes to indicate it's not set
indexPack[pid] = packInfo{tpe: restic.NumBlobTypes, usedSize: uint64(hdrSize)}
}
// iterate over all blobs in index to generate packInfo
for blob := range repo.Index().Each(ctx) {
ip := indexPack[blob.PackID]
// Set blob type if not yet set
if ip.tpe == restic.NumBlobTypes {
ip.tpe = blob.Type
}
// mark mixed packs with "Invalid blob type"
if ip.tpe != blob.Type {
ip.tpe = restic.InvalidBlob
}
bh := blob.BlobHandle
size := uint64(blob.Length)
switch {
case duplicateBlobs.Has(bh): // duplicate blob
ip.usedSize += size
ip.duplicateBlobs++
case keepBlobs.Has(bh): // used blob, not duplicate
ip.usedSize += size
ip.usedBlobs++
default: // unused blob
ip.unusedSize += size
ip.unusedBlobs++
}
// update indexPack
indexPack[blob.PackID] = ip
}
Verbosef("collecting packs for deletion and repacking\n")
removePacksFirst := restic.NewIDSet()
removePacks := restic.NewIDSet()
repackPacks := restic.NewIDSet()
var repackCandidates []packInfoWithID
repackAllPacksWithDuplicates := true
keep := func(p packInfo) {
stats.packs.keep++
if p.duplicateBlobs > 0 {
repackAllPacksWithDuplicates = false
}
}
// loop over all packs and decide what to do
bar := newProgressMax(!gopts.Quiet, uint64(len(indexPack)), "packs processed")
err := repo.List(ctx, restic.PackFile, func(id restic.ID, packSize int64) error {
p, ok := indexPack[id]
if !ok {
// Pack was not referenced in index and is not used => immediately remove!
Verboseff("will remove pack %v as it is unused and not indexed\n", id.Str())
removePacksFirst.Insert(id)
stats.size.unref += uint64(packSize)
return nil
}
if p.unusedSize+p.usedSize != uint64(packSize) &&
!(p.usedBlobs == 0 && p.duplicateBlobs == 0) {
// Pack size does not fit and pack is needed => error
// If the pack is not needed, this is no error, the pack can
// and will be simply removed, see below.
Warnf("pack %s: calculated size %d does not match real size %d\nRun 'restic rebuild-index'.",
id.Str(), p.unusedSize+p.usedSize, packSize)
return errorSizeNotMatching
}
// statistics
switch {
case p.usedBlobs == 0 && p.duplicateBlobs == 0:
stats.packs.unused++
case p.unusedBlobs == 0:
stats.packs.used++
default:
stats.packs.partlyUsed++
}
// decide what to do
switch {
case p.usedBlobs == 0 && p.duplicateBlobs == 0:
// All blobs in pack are no longer used => remove pack!
removePacks.Insert(id)
stats.blobs.remove += p.unusedBlobs
stats.size.remove += p.unusedSize
case opts.RepackCachableOnly && p.tpe == restic.DataBlob:
// if this is a data pack and --repack-cacheable-only is set => keep pack!
keep(p)
case p.unusedBlobs == 0 && p.duplicateBlobs == 0 && p.tpe != restic.InvalidBlob:
// All blobs in pack are used and not duplicates/mixed => keep pack!
keep(p)
default:
// all other packs are candidates for repacking
repackCandidates = append(repackCandidates, packInfoWithID{ID: id, packInfo: p})
}
delete(indexPack, id)
bar.Add(1)
return nil
})
bar.Done()
if err != nil {
return err
}
Verbosef("building new index for repo\n")
// At this point indexPacks contains only missing packs!
bar := newProgressMax(!gopts.Quiet, uint64(stats.packs), "packs")
idx, invalidFiles, err := index.New(ctx, repo, restic.NewIDSet(), bar)
if err != nil {
return err
// missing packs that are not needed can be ignored
ignorePacks := restic.NewIDSet()
for id, p := range indexPack {
if p.usedBlobs == 0 && p.duplicateBlobs == 0 {
ignorePacks.Insert(id)
stats.blobs.remove += p.unusedBlobs
stats.size.remove += p.unusedSize
delete(indexPack, id)
}
}
for _, id := range invalidFiles {
Warnf("incomplete pack file (will be removed): %v\n", id)
if len(indexPack) != 0 {
Warnf("The index references %d needed pack files which are missing from the repository:\n", len(indexPack))
for id := range indexPack {
Warnf(" %v\n", id)
}
return errorPacksMissing
}
if len(ignorePacks) != 0 {
Warnf("Missing but unneeded pack files are referenced in the index, will be repaired\n")
for id := range ignorePacks {
Warnf("will forget missing pack file %v\n", id)
}
}
blobs := 0
for _, pack := range idx.Packs {
stats.bytes += pack.Size
blobs += len(pack.Entries)
}
// calculate limit for number of unused bytes in the repo after repacking
maxUnusedSizeAfter := opts.maxUnusedBytes(stats.size.used)
// Sort repackCandidates such that packs with highest ratio unused/used space are picked first.
// This is equivalent to sorting by unused / total space.
// Instead of unused[i] / used[i] > unused[j] / used[j] we use
// unused[i] * used[j] > unused[j] * used[i], since the product of two uint32 values fits into a uint64.
// Moreover, duplicates and packs containing trees are sorted to the beginning.
sort.Slice(repackCandidates, func(i, j int) bool {
pi := repackCandidates[i].packInfo
pj := repackCandidates[j].packInfo
switch {
case pi.duplicateBlobs > 0 && pj.duplicateBlobs == 0:
return true
case pj.duplicateBlobs > 0 && pi.duplicateBlobs == 0:
return false
case pi.tpe != restic.DataBlob && pj.tpe == restic.DataBlob:
return true
case pj.tpe != restic.DataBlob && pi.tpe == restic.DataBlob:
return false
}
return pi.unusedSize*pj.usedSize > pj.unusedSize*pi.usedSize
})
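// Worked example (hypothetical numbers, not from the source): a pack with
// unusedSize=30 and usedSize=10 sorts before one with unusedSize=10 and
// usedSize=10, because 30*10 > 10*10, i.e. 75% unused space beats 50%,
// without any floating-point division or risk of overflow.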
repack := func(id restic.ID, p packInfo) {
repackPacks.Insert(id)
stats.blobs.repack += p.unusedBlobs + p.duplicateBlobs + p.usedBlobs
stats.size.repack += p.unusedSize + p.usedSize
stats.blobs.repackrm += p.unusedBlobs
stats.size.repackrm += p.unusedSize
}
Verbosef("repository contains %v packs (%v blobs) with %v\n",
len(idx.Packs), blobs, formatBytes(uint64(stats.bytes)))
blobCount := make(map[restic.BlobHandle]int)
var duplicateBlobs uint64
var duplicateBytes uint64
// find duplicate blobs
for _, p := range idx.Packs {
for _, entry := range p.Entries {
stats.blobs++
h := restic.BlobHandle{ID: entry.ID, Type: entry.Type}
blobCount[h]++
if blobCount[h] > 1 {
duplicateBlobs++
duplicateBytes += uint64(entry.Length)
}
}
}
for _, p := range repackCandidates {
reachedUnusedSizeAfter := (stats.size.unused-stats.size.remove-stats.size.repackrm < maxUnusedSizeAfter)
reachedRepackSize := false
if opts.MaxRepackBytes > 0 {
reachedRepackSize = stats.size.repack+p.unusedSize+p.usedSize > opts.MaxRepackBytes
}
switch {
case reachedRepackSize:
keep(p.packInfo)
case p.duplicateBlobs > 0, p.tpe != restic.DataBlob:
// repacking duplicates/non-data is only limited by repackSize
repack(p.ID, p.packInfo)
case reachedUnusedSizeAfter:
// for all other packs stop repacking if tolerated unused size is reached.
keep(p.packInfo)
default:
repack(p.ID, p.packInfo)
}
}
// if all duplicates are repacked, print out correct statistics
if repackAllPacksWithDuplicates {
stats.blobs.repackrm += stats.blobs.duplicate
stats.size.repackrm += stats.size.duplicate
}
Verboseff("\nused: %10d blobs / %s\n", stats.blobs.used, formatBytes(stats.size.used))
if stats.blobs.duplicate > 0 {
Verboseff("duplicates: %10d blobs / %s\n", stats.blobs.duplicate, formatBytes(stats.size.duplicate))
}
Verboseff("unused: %10d blobs / %s\n", stats.blobs.unused, formatBytes(stats.size.unused))
if stats.size.unref > 0 {
Verboseff("unreferenced: %s\n", formatBytes(stats.size.unref))
}
totalBlobs := stats.blobs.used + stats.blobs.unused + stats.blobs.duplicate
totalSize := stats.size.used + stats.size.duplicate + stats.size.unused + stats.size.unref
unusedSize := stats.size.duplicate + stats.size.unused
Verboseff("total: %10d blobs / %s\n", totalBlobs, formatBytes(totalSize))
Verboseff("unused size: %s of total size\n", formatPercent(unusedSize, totalSize))
Verbosef("\nto repack: %10d blobs / %s\n", stats.blobs.repack, formatBytes(stats.size.repack))
Verbosef("this removes %10d blobs / %s\n", stats.blobs.repackrm, formatBytes(stats.size.repackrm))
Verbosef("to delete: %10d blobs / %s\n", stats.blobs.remove, formatBytes(stats.size.remove+stats.size.unref))
totalPruneSize := stats.size.remove + stats.size.repackrm + stats.size.unref
Verbosef("total prune: %10d blobs / %s\n", stats.blobs.remove+stats.blobs.repackrm, formatBytes(totalPruneSize))
Verbosef("remaining: %10d blobs / %s\n", totalBlobs-(stats.blobs.remove+stats.blobs.repackrm), formatBytes(totalSize-totalPruneSize))
unusedAfter := unusedSize - stats.size.remove - stats.size.repackrm
Verbosef("unused size after prune: %s (%s of remaining size)\n",
formatBytes(unusedAfter), formatPercent(unusedAfter, totalSize-totalPruneSize))
Verbosef("\n")
Verboseff("totally used packs: %10d\n", stats.packs.used)
Verboseff("partly used packs: %10d\n", stats.packs.partlyUsed)
Verboseff("unused packs: %10d\n\n", stats.packs.unused)
Verboseff("to keep: %10d packs\n", stats.packs.keep)
Verboseff("to repack: %10d packs\n", len(repackPacks))
Verboseff("to delete: %10d packs\n", len(removePacks))
if len(removePacksFirst) > 0 {
Verboseff("to delete: %10d unreferenced packs\n\n", len(removePacksFirst))
}
if opts.DryRun {
if !gopts.JSON && gopts.verbosity >= 2 {
if len(removePacksFirst) > 0 {
Printf("Would have removed the following unreferenced packs:\n%v\n\n", removePacksFirst)
}
Printf("Would have repacked and removed the following packs:\n%v\n\n", repackPacks)
Printf("Would have removed the following no longer used packs:\n%v\n\n", removePacks)
}
// Always quit here if DryRun was set!
return nil
}
Verbosef("processed %d blobs: %d duplicate blobs, %v duplicate\n",
stats.blobs, duplicateBlobs, formatBytes(uint64(duplicateBytes)))
Verbosef("load all snapshots\n")
// find referenced blobs
snapshots, err := restic.LoadAllSnapshots(ctx, repo)
if err != nil {
return err
}
// unreferenced packs can be safely deleted first
if len(removePacksFirst) != 0 {
Verbosef("deleting unreferenced packs\n")
DeleteFiles(gopts, repo, removePacksFirst, restic.PackFile)
}
stats.snapshots = len(snapshots)
usedBlobs, err := getUsedBlobs(gopts, repo, snapshots)
if err != nil {
return err
}
var missingBlobs []restic.BlobHandle
for h := range usedBlobs {
if _, ok := blobCount[h]; !ok {
missingBlobs = append(missingBlobs, h)
}
}
if len(missingBlobs) > 0 {
return errors.Fatalf("%v not found in the new index\n"+
"Data blobs seem to be missing, aborting prune to prevent further data loss!\n"+
"Please report this error (along with the output of the 'prune' run) at\n"+
"https://github.com/restic/restic/issues/new/choose", missingBlobs)
}
Verbosef("found %d of %d data blobs still in use, removing %d blobs\n",
len(usedBlobs), stats.blobs, stats.blobs-len(usedBlobs))
// find packs that need a rewrite
rewritePacks := restic.NewIDSet()
for _, pack := range idx.Packs {
if mixedBlobs(pack.Entries) {
rewritePacks.Insert(pack.ID)
continue
}
for _, blob := range pack.Entries {
h := restic.BlobHandle{ID: blob.ID, Type: blob.Type}
if !usedBlobs.Has(h) {
rewritePacks.Insert(pack.ID)
continue
}
if blobCount[h] > 1 {
rewritePacks.Insert(pack.ID)
}
}
}
removeBytes := duplicateBytes
// find packs that are unneeded
removePacks := restic.NewIDSet()
Verbosef("will remove %d invalid files\n", len(invalidFiles))
for _, id := range invalidFiles {
removePacks.Insert(id)
}
for packID, p := range idx.Packs {
hasActiveBlob := false
for _, blob := range p.Entries {
h := restic.BlobHandle{ID: blob.ID, Type: blob.Type}
if usedBlobs.Has(h) {
hasActiveBlob = true
continue
}
removeBytes += uint64(blob.Length)
}
if hasActiveBlob {
continue
}
removePacks.Insert(packID)
if !rewritePacks.Has(packID) {
return errors.Fatalf("pack %v is unneeded, but not contained in rewritePacks", packID.Str())
}
rewritePacks.Delete(packID)
}
Verbosef("will delete %d packs and rewrite %d packs, this frees %s\n",
len(removePacks), len(rewritePacks), formatBytes(uint64(removeBytes)))
var obsoletePacks restic.IDSet
if len(rewritePacks) != 0 {
bar := newProgressMax(!gopts.Quiet, uint64(len(rewritePacks)), "packs rewritten")
obsoletePacks, err = repository.Repack(ctx, repo, rewritePacks, usedBlobs, bar)
if len(repackPacks) != 0 {
Verbosef("repacking packs\n")
bar := newProgressMax(!gopts.Quiet, uint64(len(repackPacks)), "packs repacked")
_, err := repository.Repack(ctx, repo, repackPacks, keepBlobs, bar)
bar.Done()
if err != nil {
return err
return errors.Fatalf("%s", err)
}
// Also remove repacked packs
removePacks.Merge(repackPacks)
}
removePacks.Merge(obsoletePacks)
if len(ignorePacks) == 0 {
ignorePacks = removePacks
} else {
ignorePacks.Merge(removePacks)
}
if err = rebuildIndex(ctx, repo, removePacks); err != nil {
return err
if len(ignorePacks) != 0 {
err = rebuildIndexFiles(gopts, repo, ignorePacks, nil)
if err != nil {
return errors.Fatalf("%s", err)
}
}
if len(removePacks) != 0 {
Verbosef("remove %d old packs\n", len(removePacks))
Verbosef("removing %d old packs\n", len(removePacks))
DeleteFiles(gopts, repo, removePacks, restic.PackFile)
}
@@ -260,30 +547,54 @@ func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
return nil
}
func rebuildIndexFiles(gopts GlobalOptions, repo restic.Repository, removePacks restic.IDSet, extraObsolete restic.IDs) error {
Verbosef("rebuilding index\n")
idx := (repo.Index()).(*repository.MasterIndex)
packcount := uint64(len(idx.Packs(removePacks)))
bar := newProgressMax(!gopts.Quiet, packcount, "packs processed")
obsoleteIndexes, err := idx.Save(gopts.ctx, repo, removePacks, extraObsolete, bar)
bar.Done()
if err != nil {
return err
}
Verbosef("deleting obsolete index files\n")
return DeleteFilesChecked(gopts, repo, obsoleteIndexes, restic.IndexFile)
}
func getUsedBlobs(gopts GlobalOptions, repo restic.Repository, ignoreSnapshots restic.IDSet) (usedBlobs restic.BlobSet, err error) {
ctx := gopts.ctx
Verbosef("find data that is still in use for %d snapshots\n", len(snapshots))
var snapshotTrees restic.IDs
Verbosef("loading all snapshots...\n")
err = restic.ForAllSnapshots(gopts.ctx, repo, ignoreSnapshots,
func(id restic.ID, sn *restic.Snapshot, err error) error {
debug.Log("add snapshot %v (tree %v, error %v)", id, *sn.Tree, err)
if err != nil {
return err
}
snapshotTrees = append(snapshotTrees, *sn.Tree)
return nil
})
if err != nil {
return nil, err
}
Verbosef("finding data that is still in use for %d snapshots\n", len(snapshotTrees))
usedBlobs = restic.NewBlobSet()
bar := newProgressMax(!gopts.Quiet, uint64(len(snapshots)), "snapshots")
bar.Start()
bar := newProgressMax(!gopts.Quiet, uint64(len(snapshotTrees)), "snapshots")
defer bar.Done()
for _, sn := range snapshots {
debug.Log("process snapshot %v", sn.ID())
err = restic.FindUsedBlobs(ctx, repo, *sn.Tree, usedBlobs)
if err != nil {
if repo.Backend().IsNotExist(err) {
return nil, errors.Fatal("unable to load a tree from the repo: " + err.Error())
}
return nil, err
err = restic.FindUsedBlobs(ctx, repo, snapshotTrees, usedBlobs, bar)
if err != nil {
if repo.Backend().IsNotExist(err) {
return nil, errors.Fatal("unable to load a tree from the repo: " + err.Error())
}
debug.Log("processed snapshot %v", sn.ID())
bar.Report(restic.Stat{Blobs: 1})
return nil, err
}
return usedBlobs, nil
}

View File

@@ -1,10 +1,7 @@
package main
import (
"context"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/index"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/spf13/cobra"
@@ -12,7 +9,7 @@ import (
var cmdRebuildIndex = &cobra.Command{
Use: "rebuild-index [flags]",
Short: "Build a new index file",
Short: "Build a new index",
Long: `
The "rebuild-index" command creates a new index based on the pack files in the
repository.
@@ -24,15 +21,25 @@ Exit status is 0 if the command was successful, and non-zero if there was any er
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runRebuildIndex(rebuildIndexOptions, globalOptions)
},
}
// RebuildIndexOptions collects all options for the rebuild-index command.
type RebuildIndexOptions struct {
ReadAllPacks bool
}
var rebuildIndexOptions RebuildIndexOptions
func init() {
cmdRoot.AddCommand(cmdRebuildIndex)
f := cmdRebuildIndex.Flags()
f.BoolVar(&rebuildIndexOptions.ReadAllPacks, "read-all-packs", false, "read all pack files to generate new index from scratch")
}
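// Example invocation (illustrative): `restic rebuild-index --read-all-packs`
// ignores the existing index entirely and re-reads every pack file in the
// repository to build the new index from scratch.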
func runRebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions) error {
repo, err := OpenRepository(gopts)
if err != nil {
return err
@@ -44,58 +51,80 @@ func runRebuildIndex(gopts GlobalOptions) error {
return err
}
return rebuildIndex(opts, gopts, repo, restic.NewIDSet())
}
func rebuildIndex(opts RebuildIndexOptions, gopts GlobalOptions, repo *repository.Repository, ignorePacks restic.IDSet) error {
ctx := gopts.ctx
var obsoleteIndexes restic.IDs
packSizeFromList := make(map[restic.ID]int64)
packSizeFromIndex := make(map[restic.ID]int64)
removePacks := restic.NewIDSet()
if opts.ReadAllPacks {
// get list of old index files but start with empty index
err := repo.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
obsoleteIndexes = append(obsoleteIndexes, id)
return nil
})
if err != nil {
return err
}
} else {
Verbosef("loading indexes...\n")
err := repo.LoadIndex(gopts.ctx)
if err != nil {
return err
}
packSizeFromIndex = repo.Index().PackSize(ctx, false)
}
Verbosef("getting pack files to read...\n")
err := repo.List(ctx, restic.PackFile, func(id restic.ID, packSize int64) error {
size, ok := packSizeFromIndex[id]
if !ok || size != packSize {
// Pack was not referenced in index or size does not match
packSizeFromList[id] = packSize
removePacks.Insert(id)
}
if !ok {
Warnf("adding pack file to index %v\n", id)
} else if size != packSize {
Warnf("reindexing pack file %v with unexpected size %v instead of %v\n", id, packSize, size)
}
delete(packSizeFromIndex, id)
return nil
})
if err != nil {
return err
}
for id := range packSizeFromIndex {
// forget pack files that are referenced in the index but do not exist
// when rebuilding the index
removePacks.Insert(id)
Warnf("removing not found pack file %v\n", id)
}
if len(packSizeFromList) > 0 {
Verbosef("reading pack files\n")
bar := newProgressMax(!globalOptions.Quiet, uint64(len(packSizeFromList)), "packs")
invalidFiles, err := repo.CreateIndexFromPacks(ctx, packSizeFromList, bar)
bar.Done()
if err != nil {
return err
}
for _, id := range invalidFiles {
Verboseff("skipped incomplete pack file: %v\n", id)
}
}
err = rebuildIndexFiles(gopts, repo, removePacks, obsoleteIndexes)
if err != nil {
return err
}
Verbosef("done\n")
return nil
}

View File

@@ -117,7 +117,10 @@ func runRecover(gopts GlobalOptions) error {
ModTime: time.Now(),
ChangeTime: time.Now(),
}
tree.Insert(&node)
err = tree.Insert(&node)
if err != nil {
return err
}
}
treeID, err := repo.SaveTree(gopts.ctx, tree)

View File

@@ -191,14 +191,26 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
Verbosef("restoring %s to %s\n", res.Snapshot(), opts.Target)
err = res.RestoreTo(ctx, opts.Target)
if err == nil && opts.Verify {
if err != nil {
return err
}
if totalErrors > 0 {
return errors.Fatalf("There were %d errors\n", totalErrors)
}
if opts.Verify {
Verbosef("verifying files in %s\n", opts.Target)
var count int
count, err = res.VerifyFiles(ctx, opts.Target)
if err != nil {
return err
}
if totalErrors > 0 {
return errors.Fatalf("There were %d errors\n", totalErrors)
}
Verbosef("finished verifying %d files in %s\n", count, opts.Target)
}
return nil
}

View File

@@ -47,7 +47,7 @@ func init() {
f := cmdSnapshots.Flags()
f.StringArrayVarP(&snapshotOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host` (can be specified multiple times)")
f.Var(&snapshotOptions.Tags, "tag", "only consider snapshots which include this `taglist` (can be specified multiple times)")
f.Var(&snapshotOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
f.StringArrayVar(&snapshotOptions.Paths, "path", nil, "only consider snapshots for this `path` (can be specified multiple times)")
f.BoolVarP(&snapshotOptions.Compact, "compact", "c", false, "use compact output format")
f.BoolVar(&snapshotOptions.Last, "last", false, "only show the last snapshot for each host and path")
@@ -243,7 +243,10 @@ func PrintSnapshots(stdout io.Writer, list restic.Snapshots, reasons []restic.Ke
}
}
err := tab.Write(stdout)
if err != nil {
Warnf("error printing: %v\n", err)
}
}
// PrintSnapshotGroupHeader prints which group of the group-by option the

View File

@@ -11,7 +11,8 @@ import (
func TestEmptySnapshotGroupJSON(t *testing.T) {
for _, grouped := range []bool{false, true} {
var w strings.Builder
printSnapshotGroupJSON(&w, nil, grouped)
err := printSnapshotGroupJSON(&w, nil, grouped)
rtest.OK(t, err)
rtest.Equals(t, "[]", strings.TrimSpace(w.String()))
}

View File

@@ -166,7 +166,7 @@ func statsWalkSnapshot(ctx context.Context, snapshot *restic.Snapshot, repo rest
if statsOptions.countMode == countModeRawData {
// count just the sizes of unique blobs; we don't need to walk the tree
// ourselves in this case, since a nifty function does it for us
return restic.FindUsedBlobs(ctx, repo, restic.IDs{*snapshot.Tree}, stats.blobs, nil)
}
err := walker.Walk(ctx, repo, *snapshot.Tree, restic.NewIDSet(), statsWalkTree(repo, stats))

View File

@@ -38,9 +38,9 @@ type TagOptions struct {
Hosts []string
Paths []string
Tags restic.TagLists
SetTags restic.TagLists
AddTags restic.TagLists
RemoveTags restic.TagLists
}
var tagOptions TagOptions
@@ -49,9 +49,9 @@ func init() {
cmdRoot.AddCommand(cmdTag)
tagFlags := cmdTag.Flags()
tagFlags.StringSliceVar(&tagOptions.SetTags, "set", nil, "`tag` which will replace the existing tags (can be given multiple times)")
tagFlags.StringSliceVar(&tagOptions.AddTags, "add", nil, "`tag` which will be added to the existing tags (can be given multiple times)")
tagFlags.StringSliceVar(&tagOptions.RemoveTags, "remove", nil, "`tag` which will be removed from the existing tags (can be given multiple times)")
tagFlags.Var(&tagOptions.SetTags, "set", "`tags` which will replace the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.Var(&tagOptions.AddTags, "add", "`tags` which will be added to the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.Var(&tagOptions.RemoveTags, "remove", "`tags` which will be removed from the existing tags in the format `tag[,tag,...]` (can be given multiple times)")
tagFlags.StringArrayVarP(&tagOptions.Hosts, "host", "H", nil, "only consider snapshots for this `host`, when no snapshot ID is given (can be specified multiple times)")
tagFlags.Var(&tagOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot-ID is given")
@@ -130,7 +130,7 @@ func runTag(opts TagOptions, gopts GlobalOptions, args []string) error {
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()
for sn := range FindFilteredSnapshots(ctx, repo, opts.Hosts, opts.Tags, opts.Paths, args) {
changed, err := changeTags(ctx, repo, sn, opts.SetTags.Flatten(), opts.AddTags.Flatten(), opts.RemoveTags.Flatten())
if err != nil {
Warnf("unable to modify the tags for snapshot ID %q, ignoring: %v\n", sn.ID(), err)
continue

View File

@@ -9,7 +9,7 @@ import (
// DeleteFiles deletes the given fileList of fileType in parallel
// it will print a warning if there is an error, but continue deleting the remaining files
func DeleteFiles(gopts GlobalOptions, repo restic.Repository, fileList restic.IDSet, fileType restic.FileType) {
_ = deleteFiles(gopts, true, repo, fileList, fileType)
}
// DeleteFilesChecked deletes the given fileList of fileType in parallel
@@ -33,8 +33,8 @@ func deleteFiles(gopts GlobalOptions, ignoreError bool, repo restic.Repository,
}()
bar := newProgressMax(!gopts.JSON && !gopts.Quiet, uint64(totalCount), "files deleted")
defer bar.Done()
wg, ctx := errgroup.WithContext(gopts.ctx)
for i := 0; i < numDeleteWorkers; i++ {
wg.Go(func() error {
for id := range fileChan {
@@ -51,12 +51,11 @@ func deleteFiles(gopts GlobalOptions, ignoreError bool, repo restic.Repository,
if !gopts.JSON && gopts.verbosity > 2 {
Verbosef("removed %v\n", h)
}
bar.Add(1)
}
return nil
})
}
err := wg.Wait()
return err
}

View File

@@ -180,7 +180,9 @@ func isDirExcludedByFile(dir, tagFilename, header string) bool {
Warnf("could not open exclusion tagfile: %v", err)
return false
}
defer func() {
_ = f.Close()
}()
buf := make([]byte, len(header))
_, err = io.ReadFull(f, buf)
// EOF is handled with a dedicated message, otherwise the warning would be too cryptic
@@ -199,12 +201,17 @@ func isDirExcludedByFile(dir, tagFilename, header string) bool {
return true
}
// DeviceMap is used to track allowed source devices for backup. This is used to
// check for crossing mount points during backup (for --one-file-system). It
// maps the name of a source path to its device ID.
type DeviceMap map[string]uint64
// NewDeviceMap creates a new device map from the list of source paths.
func NewDeviceMap(allowedSourcePaths []string) (DeviceMap, error) {
deviceMap := make(map[string]uint64)
for _, item := range allowedSourcePaths {
item, err := filepath.Abs(filepath.Clean(item))
if err != nil {
return nil, err
}
@@ -213,30 +220,63 @@ func gatherDevices(items []string) (deviceMap map[string]uint64, err error) {
if err != nil {
return nil, err
}
id, err := fs.DeviceID(fi)
if err != nil {
return nil, err
}
deviceMap[item] = id
}
if len(deviceMap) == 0 {
return nil, errors.New("zero allowed devices")
}
return deviceMap, nil
}
// IsAllowed returns true if the path is located on an allowed device.
func (m DeviceMap) IsAllowed(item string, deviceID uint64) (bool, error) {
for dir := item; ; dir = filepath.Dir(dir) {
debug.Log("item %v, test dir %v", item, dir)
// find a parent directory that is on an allowed device (otherwise
// we would not traverse the directory at all)
allowedID, ok := m[dir]
if !ok {
if dir == filepath.Dir(dir) {
// arrived at root, no allowed device found. this should not happen.
break
}
continue
}
// if the item has a different device ID than the parent directory,
// we crossed a file system boundary
if allowedID != deviceID {
debug.Log("item %v (dir %v) on disallowed device %d", item, dir, deviceID)
return false, nil
}
// item is on allowed device, accept it
debug.Log("item %v allowed", item)
return true, nil
}
return false, fmt.Errorf("item %v (device ID %v) not found, deviceMap: %v", item, deviceID, m)
}
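// Illustration (hypothetical map, mirroring the test below): with
// DeviceMap{"/": 1, "/usr/local": 5}, IsAllowed("/usr/share", 1) walks up to "/",
// finds matching device 1 and accepts the item; IsAllowed("/proc", 2) also
// reaches "/", but the device IDs differ, so the item is rejected.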
// rejectByDevice returns a RejectFunc that rejects files which are on a
// different file systems than the files/dirs in samples.
func rejectByDevice(samples []string) (RejectFunc, error) {
deviceMap, err := NewDeviceMap(samples)
if err != nil {
return nil, err
}
debug.Log("allowed devices: %v\n", allowed)
debug.Log("allowed devices: %v\n", deviceMap)
return func(item string, fi os.FileInfo) bool {
item = filepath.Clean(item)
id, err := fs.DeviceID(fi)
if err != nil {
// This should never happen because NewDeviceMap() would have
@@ -244,26 +284,55 @@ func rejectByDevice(samples []string) (RejectFunc, error) {
panic(err)
}
allowed, err := deviceMap.IsAllowed(filepath.Clean(item), id)
if err != nil {
// this should not happen
panic(fmt.Sprintf("error checking device ID of %v: %v", item, err))
}
if allowed {
// accept item
return false
}
panic(fmt.Sprintf("item %v, device id %v not found, allowedDevs: %v", item, id, allowed))
// reject everything except directories
if !fi.IsDir() {
return true
}
// special case: make sure we keep mountpoints (directories which
// contain a mounted file system). Test this by checking if the parent
// directory would be included.
parentDir := filepath.Dir(filepath.Clean(item))
parentFI, err := fs.Lstat(parentDir)
if err != nil {
debug.Log("item %v: error running lstat() on parent directory: %v", item, err)
// if in doubt, reject
return true
}
parentDeviceID, err := fs.DeviceID(parentFI)
if err != nil {
debug.Log("item %v: getting device ID of parent directory: %v", item, err)
// if in doubt, reject
return true
}
parentAllowed, err := deviceMap.IsAllowed(parentDir, parentDeviceID)
if err != nil {
debug.Log("item %v: error checking parent directory: %v", item, err)
// if in doubt, reject
return true
}
if parentAllowed {
// we found a mount point, so accept the directory
return false
}
// reject everything else
return true
}, nil
}
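// Example (hypothetical paths): when backing up "/" with --one-file-system, the
// mountpoint directory "/mnt/usb" itself is kept because its parent "/" is on an
// allowed device, while everything below "/mnt/usb" is rejected since it lives
// on a different device.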

View File

@@ -318,3 +318,47 @@ func TestIsExcludedByFileSize(t *testing.T) {
}
}
}
func TestDeviceMap(t *testing.T) {
deviceMap := DeviceMap{
filepath.FromSlash("/"): 1,
filepath.FromSlash("/usr/local"): 5,
}
var tests = []struct {
item string
deviceID uint64
allowed bool
}{
{"/root", 1, true},
{"/usr", 1, true},
{"/proc", 2, false},
{"/proc/1234", 2, false},
{"/usr", 3, false},
{"/usr/share", 3, false},
{"/usr/local", 5, true},
{"/usr/local/foobar", 5, true},
{"/usr/local/foobar/submount", 23, false},
{"/usr/local/foobar/submount/file", 23, false},
{"/usr/local/foobar/outhersubmount", 1, false},
{"/usr/local/foobar/outhersubmount/otherfile", 1, false},
}
for _, test := range tests {
t.Run("", func(t *testing.T) {
res, err := deviceMap.IsAllowed(filepath.FromSlash(test.item), test.deviceID)
if err != nil {
t.Fatal(err)
}
if res != test.allowed {
t.Fatalf("wrong result returned by IsAllowed(%v): want %v, got %v", test.item, test.allowed, res)
}
})
}
}

View File

@@ -39,7 +39,7 @@ import (
"golang.org/x/crypto/ssh/terminal"
)
var version = "0.11.0"
var version = "0.12.0"
// TimeFormat is the format used for all timestamps printed by restic.
const TimeFormat = "2006-01-02 15:04:05"
@@ -71,7 +71,7 @@ type GlobalOptions struct {
stdout io.Writer
stderr io.Writer
backendTestHook, backendInnerTestHook backendWrapper
// verbosity is set as follows:
// 0 means: don't print any messages except errors, this is used when --quiet is specified
@@ -231,6 +231,13 @@ func Verbosef(format string, args ...interface{}) {
}
}
// Verboseff calls Printf to write the message when the verbosity is >= 2
func Verboseff(format string, args ...interface{}) {
if globalOptions.verbosity >= 2 {
Printf(format, args...)
}
}
// PrintProgress wraps fmt.Printf to handle the difference in writing progress
// information to terminals and non-terminal stdout
func PrintProgress(format string, args ...interface{}) {
@@ -688,12 +695,8 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
switch loc.Scheme {
case "local":
be, err = local.Open(globalOptions.ctx, cfg.(local.Config))
case "sftp":
be, err = sftp.Open(globalOptions.ctx, cfg.(sftp.Config))
case "s3":
be, err = s3.Open(globalOptions.ctx, cfg.(s3.Config), rt)
case "gs":
@@ -717,6 +720,19 @@ func open(s string, gopts GlobalOptions, opts options.Options) (restic.Backend,
return nil, errors.Fatalf("unable to open repo at %v: %v", location.StripPassword(s), err)
}
// wrap backend if a test specified an inner hook
if gopts.backendInnerTestHook != nil {
be, err = gopts.backendInnerTestHook(be)
if err != nil {
return nil, err
}
}
if loc.Scheme == "local" || loc.Scheme == "sftp" {
// wrap the backend in a LimitBackend so that the throughput is limited
be = limiter.LimitBackend(be, lim)
}
// check if config is there
fi, err := be.Stat(globalOptions.ctx, restic.Handle{Type: restic.ConfigFile})
if err != nil {

View File

@@ -54,7 +54,7 @@ func TestReadRepo(t *testing.T) {
var opts3 GlobalOptions
opts3.RepositoryFile = foo + "-invalid"
_, err = ReadRepo(opts3)
if err == nil {
t.Fatal("must not read repository path from invalid file path")
}

View File

@@ -1,7 +1,4 @@
// +build darwin freebsd linux
package main
@@ -163,9 +160,6 @@ func TestMount(t *testing.T) {
repo, err := OpenRepository(env.gopts)
rtest.OK(t, err)
// We remove the mountpoint now to check that cmdMount creates it
rtest.RemoveAll(t, env.mountpoint)
checkSnapshots(t, env.gopts, repo, env.mountpoint, env.repo, []restic.ID{}, 0)
rtest.SetupTarTestFixture(t, env.testdata, filepath.Join("testdata", "backup-data.tar.gz"))

View File

@@ -166,7 +166,7 @@ func testRunDiffOutput(gopts GlobalOptions, firstSnapshotID string, secondSnapsh
ShowMetadata: false,
}
err := runDiff(opts, gopts, []string{firstSnapshotID, secondSnapshotID})
return buf.String(), err
}
func testRunRebuildIndex(t testing.TB, gopts GlobalOptions) {
@@ -175,7 +175,7 @@ func testRunRebuildIndex(t testing.TB, gopts GlobalOptions) {
globalOptions.stdout = os.Stdout
}()
rtest.OK(t, runRebuildIndex(RebuildIndexOptions{}, gopts))
}
func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
@@ -270,8 +270,8 @@ func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
"Expected 2 snapshots to be removed, got %v", len(forgets[0].Remove))
}
func testRunPrune(t testing.TB, gopts GlobalOptions, opts PruneOptions) {
rtest.OK(t, runPrune(opts, gopts))
}
func testSetupBackupData(t testing.TB, env *testEnvironment) string {
@@ -566,9 +566,9 @@ func TestBackupErrors(t *testing.T) {
// Assume failure
inaccessibleFile := filepath.Join(env.testdata, "0", "0", "9", "0")
rtest.OK(t, os.Chmod(inaccessibleFile, 0000))
defer func() {
rtest.OK(t, os.Chmod(inaccessibleFile, 0644))
}()
opts := BackupOptions{}
gopts := env.gopts
@@ -657,16 +657,24 @@ func TestBackupTags(t *testing.T) {
testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
testRunCheck(t, env.gopts)
newest, _ := testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 0,
"expected no tags, got %v", newest.Tags)
parent := newest
opts.Tags = []string{"NL"}
opts.Tags = restic.TagLists{[]string{"NL"}}
testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
testRunCheck(t, env.gopts)
newest, _ = testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 1 && newest.Tags[0] == "NL",
"expected one NL tag, got %v", newest.Tags)
// Tagged backup should have untagged backup as parent.
@@ -833,48 +841,59 @@ func TestTag(t *testing.T) {
testRunBackup(t, "", []string{env.testdata}, BackupOptions{}, env.gopts)
testRunCheck(t, env.gopts)
newest, _ := testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a new backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 0,
"expected no tags, got %v", newest.Tags)
rtest.Assert(t, newest.Original == nil,
"expected original ID to be nil, got %v", newest.Original)
originalID := *newest.ID
testRunTag(t, TagOptions{SetTags: []string{"NL"}}, env.gopts)
testRunTag(t, TagOptions{SetTags: restic.TagLists{[]string{"NL"}}}, env.gopts)
testRunCheck(t, env.gopts)
newest, _ = testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 1 && newest.Tags[0] == "NL",
"set failed, expected one NL tag, got %v", newest.Tags)
rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
rtest.Assert(t, *newest.Original == originalID,
"expected original ID to be set to the first snapshot id")
testRunTag(t, TagOptions{AddTags: []string{"CH"}}, env.gopts)
testRunTag(t, TagOptions{AddTags: restic.TagLists{[]string{"CH"}}}, env.gopts)
testRunCheck(t, env.gopts)
newest, _ = testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 2 && newest.Tags[0] == "NL" && newest.Tags[1] == "CH",
"add failed, expected CH,NL tags, got %v", newest.Tags)
rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
rtest.Assert(t, *newest.Original == originalID,
"expected original ID to be set to the first snapshot id")
testRunTag(t, TagOptions{RemoveTags: []string{"NL"}}, env.gopts)
testRunTag(t, TagOptions{RemoveTags: restic.TagLists{[]string{"NL"}}}, env.gopts)
testRunCheck(t, env.gopts)
newest, _ = testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 1 && newest.Tags[0] == "CH",
"remove failed, expected one CH tag, got %v", newest.Tags)
rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
rtest.Assert(t, *newest.Original == originalID,
"expected original ID to be set to the first snapshot id")
testRunTag(t, TagOptions{AddTags: []string{"US", "RU"}}, env.gopts)
testRunTag(t, TagOptions{RemoveTags: []string{"CH", "US", "RU"}}, env.gopts)
testRunTag(t, TagOptions{AddTags: restic.TagLists{[]string{"US", "RU"}}}, env.gopts)
testRunTag(t, TagOptions{RemoveTags: restic.TagLists{[]string{"CH", "US", "RU"}}}, env.gopts)
testRunCheck(t, env.gopts)
newest, _ = testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 0,
"expected no tags, got %v", newest.Tags)
rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
@@ -882,10 +901,12 @@ func TestTag(t *testing.T) {
"expected original ID to be set to the first snapshot id")
// Check special case of removing all tags.
testRunTag(t, TagOptions{SetTags: []string{""}}, env.gopts)
testRunTag(t, TagOptions{SetTags: restic.TagLists{[]string{""}}}, env.gopts)
testRunCheck(t, env.gopts)
newest, _ = testRunSnapshots(t, env.gopts)
if newest == nil {
t.Fatal("expected a backup, got nil")
}
rtest.Assert(t, len(newest.Tags) == 0,
"expected no tags, got %v", newest.Tags)
rtest.Assert(t, newest.Original != nil, "expected original snapshot id, got nil")
@@ -933,7 +954,7 @@ func testRunKeyAddNewKeyUserHost(t testing.TB, gopts GlobalOptions) {
keyHostname = ""
}()
rtest.OK(t, cmdKey.Flags().Parse([]string{"--user=john", "--host=example.com"}))
t.Log("adding key for john@example.com")
rtest.OK(t, runKey(gopts, []string{"add"}))
@@ -1106,7 +1127,7 @@ func TestRestoreLatest(t *testing.T) {
testRunBackup(t, "", []string{filepath.Base(env.testdata)}, opts, env.gopts)
testRunCheck(t, env.gopts)
rtest.OK(t, os.Remove(p))
rtest.OK(t, appendRandomData(p, 101))
testRunBackup(t, "", []string{filepath.Base(env.testdata)}, opts, env.gopts)
testRunCheck(t, env.gopts)
@@ -1351,7 +1372,7 @@ func TestRebuildIndexFailsOnAppendOnly(t *testing.T) {
env.gopts.backendTestHook = func(r restic.Backend) (restic.Backend, error) {
return &appendOnlyBackend{r}, nil
}
err := runRebuildIndex(RebuildIndexOptions{}, env.gopts)
if err == nil {
t.Error("expected rebuildIndex to fail")
}
@@ -1386,6 +1407,32 @@ func TestCheckRestoreNoLock(t *testing.T) {
}
func TestPrune(t *testing.T) {
t.Run("0", func(t *testing.T) {
opts := PruneOptions{MaxUnused: "0%"}
checkOpts := CheckOptions{ReadData: true, CheckUnused: true}
testPrune(t, opts, checkOpts)
})
t.Run("50", func(t *testing.T) {
opts := PruneOptions{MaxUnused: "50%"}
checkOpts := CheckOptions{ReadData: true}
testPrune(t, opts, checkOpts)
})
t.Run("unlimited", func(t *testing.T) {
opts := PruneOptions{MaxUnused: "unlimited"}
checkOpts := CheckOptions{ReadData: true}
testPrune(t, opts, checkOpts)
})
t.Run("CachableOnly", func(t *testing.T) {
opts := PruneOptions{MaxUnused: "5%", RepackCachableOnly: true}
checkOpts := CheckOptions{ReadData: true}
testPrune(t, opts, checkOpts)
})
}
func testPrune(t *testing.T, pruneOpts PruneOptions, checkOpts CheckOptions) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
@@ -1406,10 +1453,12 @@ func TestPrune(t *testing.T) {
testRunForgetJSON(t, env.gopts)
testRunForget(t, env.gopts, firstSnapshot[0].String())
testRunPrune(t, env.gopts, pruneOpts)
rtest.OK(t, runCheck(checkOpts, env.gopts, nil))
}
var pruneDefaultOptions = PruneOptions{MaxUnused: "5%"}
func listPacks(gopts GlobalOptions, t *testing.T) restic.IDSet {
r, err := OpenRepository(gopts)
rtest.OK(t, err)
@@ -1452,14 +1501,8 @@ func TestPruneWithDamagedRepository(t *testing.T) {
"expected one snapshot, got %v", snapshotIDs)
// prune should fail
rtest.Assert(t, runPrune(pruneDefaultOptions, env.gopts) == errorPacksMissing,
"prune should have reported index not complete error")
}
// Test repos for edge cases
@@ -1469,37 +1512,43 @@ func TestEdgeCaseRepos(t *testing.T) {
// repo where index is completely missing
// => check and prune should fail
t.Run("no-index", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-index-missing.tar.gz", opts, false, false)
testEdgeCaseRepo(t, "repo-index-missing.tar.gz", opts, pruneDefaultOptions, false, false)
})
// repo where an existing and used blob is missing from the index
// => check should fail, prune should heal this
// => check and prune should fail
t.Run("index-missing-blob", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-index-missing-blob.tar.gz", opts, false, true)
testEdgeCaseRepo(t, "repo-index-missing-blob.tar.gz", opts, pruneDefaultOptions, false, false)
})
// repo where a blob is missing
// => check and prune should fail
t.Run("no-data", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-data-missing.tar.gz", opts, false, false)
t.Run("missing-data", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-data-missing.tar.gz", opts, pruneDefaultOptions, false, false)
})
// repo where blobs which are not needed are missing or in invalid pack files
// => check should fail and prune should repair this
t.Run("missing-unused-data", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-unused-data-missing.tar.gz", opts, pruneDefaultOptions, false, true)
})
// repo where data exists that is not referenced
// => check and prune should fully work
t.Run("unreferenced-data", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-unreferenced-data.tar.gz", opts, true, true)
testEdgeCaseRepo(t, "repo-unreferenced-data.tar.gz", opts, pruneDefaultOptions, true, true)
})
// repo where an obsolete index still exists
// => check and prune should fully work
t.Run("obsolete-index", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-obsolete-index.tar.gz", opts, true, true)
testEdgeCaseRepo(t, "repo-obsolete-index.tar.gz", opts, pruneDefaultOptions, true, true)
})
// repo which contains mixed (data/tree) packs
// => check and prune should fully work
t.Run("mixed-packs", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-mixed.tar.gz", opts, true, true)
testEdgeCaseRepo(t, "repo-mixed.tar.gz", opts, pruneDefaultOptions, true, true)
})
// repo which contains duplicate blobs
@@ -1510,11 +1559,11 @@ func TestEdgeCaseRepos(t *testing.T) {
CheckUnused: true,
}
t.Run("duplicates", func(t *testing.T) {
testEdgeCaseRepo(t, "repo-duplicates.tar.gz", opts, false, true)
testEdgeCaseRepo(t, "repo-duplicates.tar.gz", opts, pruneDefaultOptions, false, true)
})
}
func testEdgeCaseRepo(t *testing.T, tarfile string, optionsCheck CheckOptions, optionsPrune PruneOptions, checkOK, pruneOK bool) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
@@ -1524,19 +1573,78 @@ func testEdgeCaseRepo(t *testing.T, tarfile string, options CheckOptions, checkO
if checkOK {
testRunCheck(t, env.gopts)
} else {
rtest.Assert(t, runCheck(optionsCheck, env.gopts, nil) != nil,
"check should have reported an error")
}
if pruneOK {
testRunPrune(t, env.gopts, optionsPrune)
testRunCheck(t, env.gopts)
} else {
rtest.Assert(t, runPrune(optionsPrune, env.gopts) != nil,
"prune should have reported an error")
}
}
// a listOnceBackend only allows listing once per filetype
// listing filetypes more than once may cause problems with eventually consistent
// backends (like e.g. AWS S3) as the second listing may be inconsistent to what
// is expected by the first listing + some operations.
type listOnceBackend struct {
restic.Backend
listedFileType map[restic.FileType]bool
}
func newListOnceBackend(be restic.Backend) *listOnceBackend {
return &listOnceBackend{
Backend: be,
listedFileType: make(map[restic.FileType]bool),
}
}
func (be *listOnceBackend) List(ctx context.Context, t restic.FileType, fn func(restic.FileInfo) error) error {
if t != restic.LockFile && be.listedFileType[t] {
return errors.Errorf("tried listing type %v the second time", t)
}
be.listedFileType[t] = true
return be.Backend.List(ctx, t, fn)
}
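// Sketch of the intended use (see TestListOnce below): installing this wrapper
// via the backendTestHook makes any command that lists the same file type twice
// fail the test, which guards against access patterns that break on eventually
// consistent backends such as AWS S3.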
func TestListOnce(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
env.gopts.backendTestHook = func(r restic.Backend) (restic.Backend, error) {
return newListOnceBackend(r), nil
}
pruneOpts := PruneOptions{MaxUnused: "0"}
checkOpts := CheckOptions{ReadData: true, CheckUnused: true}
testSetupBackupData(t, env)
opts := BackupOptions{}
testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
firstSnapshot := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(firstSnapshot) == 1,
"expected one snapshot, got %v", firstSnapshot)
testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "2")}, opts, env.gopts)
testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9", "3")}, opts, env.gopts)
snapshotIDs := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(snapshotIDs) == 3,
"expected 3 snapshots, got %v", snapshotIDs)
testRunForgetJSON(t, env.gopts)
testRunForget(t, env.gopts, firstSnapshot[0].String())
testRunPrune(t, env.gopts, pruneOpts)
rtest.OK(t, runCheck(checkOpts, env.gopts, nil))
rtest.OK(t, runRebuildIndex(RebuildIndexOptions{}, env.gopts))
rtest.OK(t, runRebuildIndex(RebuildIndexOptions{ReadAllPacks: true}, env.gopts))
}
func TestHardLink(t *testing.T) {
// this test assumes a test set with a single directory containing hard linked files
env, cleanup := withTestEnvironment(t)
@@ -1660,16 +1768,35 @@ func copyFile(dst string, src string) error {
if err != nil {
return err
}
defer srcFile.Close()
dstFile, err := os.Create(dst)
if err != nil {
// ignore subsequent errors
_ = srcFile.Close()
return err
}
defer dstFile.Close()
_, err = io.Copy(dstFile, srcFile)
return err
if err != nil {
// ignore subsequent errors
_ = srcFile.Close()
_ = dstFile.Close()
return err
}
err = srcFile.Close()
if err != nil {
// ignore subsequent errors
_ = dstFile.Close()
return err
}
err = dstFile.Close()
if err != nil {
return err
}
return nil
}
var diffOutputRegexPatterns = []string{
@@ -1742,3 +1869,53 @@ func TestDiff(t *testing.T) {
rtest.Assert(t, r.MatchString(out), "expected pattern %v in output, got\n%v", pattern, out)
}
}
type writeToOnly struct {
rd io.Reader
}
func (r *writeToOnly) Read(p []byte) (n int, err error) {
return 0, fmt.Errorf("should have called WriteTo instead")
}
func (r *writeToOnly) WriteTo(w io.Writer) (int64, error) {
return io.Copy(w, r.rd)
}
type onlyLoadWithWriteToBackend struct {
restic.Backend
}
func (be *onlyLoadWithWriteToBackend) Load(ctx context.Context, h restic.Handle,
length int, offset int64, fn func(rd io.Reader) error) error {
return be.Backend.Load(ctx, h, length, offset, func(rd io.Reader) error {
return fn(&writeToOnly{rd: rd})
})
}
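// Background for this wrapper: io.Copy prefers a reader's WriteTo method over
// Read whenever the reader implements io.WriterTo. Since writeToOnly fails all
// Read calls, the test below only passes if restic propagates the WriteTo fast
// path through every backend layer.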
func TestBackendLoadWriteTo(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
// setup backend which only works if its WriteTo method is correctly propagated upwards
env.gopts.backendInnerTestHook = func(r restic.Backend) (restic.Backend, error) {
return &onlyLoadWithWriteToBackend{Backend: r}, nil
}
testSetupBackupData(t, env)
// add some data, but make sure that it isn't cached during upload
opts := BackupOptions{}
env.gopts.NoCache = true
testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
// loading snapshots must still work
env.gopts.NoCache = false
firstSnapshot := testRunList(t, "snapshots", env.gopts)
rtest.Assert(t, len(firstSnapshot) == 1,
"expected one snapshot, got %v", firstSnapshot)
// test readData using the hashing.Reader
testRunCheck(t, env.gopts)
}

View File

@@ -85,9 +85,9 @@ func refreshLocks(wg *sync.WaitGroup, done <-chan struct{}) {
}
}
func unlockRepo(lock *restic.Lock) {
if lock == nil {
return
}
globalLocks.Lock()
@@ -99,18 +99,17 @@ func unlockRepo(lock *restic.Lock) error {
debug.Log("unlocking repository with lock %v", lock)
if err := lock.Unlock(); err != nil {
debug.Log("error while unlocking: %v", err)
Warnf("error while unlocking: %v", err)
return
}
// remove the lock from the list of locks
globalLocks.locks = append(globalLocks.locks[:i], globalLocks.locks[i+1:]...)
return
}
}
debug.Log("unable to find lock %v in the global list of locks, ignoring", lock)
}
func unlockAll() error {

View File

@@ -32,7 +32,7 @@ directories in an encrypted repository stored on different backends.
PersistentPreRunE: func(c *cobra.Command, args []string) error {
// set verbosity, default is one
globalOptions.verbosity = 1
if globalOptions.Quiet && (globalOptions.Verbose > 1) {
if globalOptions.Quiet && globalOptions.Verbose > 0 {
return errors.Fatal("--quiet and --verbose cannot be specified at the same time")
}

View File

@@ -2,35 +2,53 @@ package main
import (
"fmt"
"os"
"strconv"
"time"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui/progress"
)
// calculateProgressInterval returns the interval configured via RESTIC_PROGRESS_FPS
// or if unset returns an interval for 60fps on interactive terminals and 0 (=disabled)
// for non-interactive terminals
func calculateProgressInterval() time.Duration {
interval := time.Second / 60
fps, err := strconv.ParseFloat(os.Getenv("RESTIC_PROGRESS_FPS"), 64)
if err == nil && fps > 0 {
if fps > 60 {
fps = 60
}
interval = time.Duration(float64(time.Second) / fps)
} else if !stdoutIsTerminal() {
interval = 0
}
return interval
}
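// Usage note (illustrative): running `RESTIC_PROGRESS_FPS=1 restic backup ~/data`
// limits progress refreshes to one per second; values above 60 are capped at 60.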
// newProgressMax returns a progress.Counter that prints to stdout.
func newProgressMax(show bool, max uint64, description string) *progress.Counter {
if !show {
return nil
}
interval := calculateProgressInterval()
return progress.New(interval, max, func(v uint64, max uint64, d time.Duration, final bool) {
var status string
if max == 0 {
status = fmt.Sprintf("[%s] %d %s", formatDuration(d), v, description)
} else {
status = fmt.Sprintf("[%s] %s %d / %d %s",
formatDuration(d), formatPercent(v, max), v, max, description)
}
if w := stdoutTerminalWidth(); w > 0 {
status = shortenStatus(w, status)
}
PrintProgress("%s", status)
if final {
fmt.Print("\n")
}
})
}

Binary file not shown.

View File

@@ -238,6 +238,20 @@ after the bucket name like this:
For an S3-compatible server that is not Amazon (like Minio, see below),
or is only available via HTTP, you can specify the URL to the server
like this: ``s3:http://server:port/bucket_name``.
.. note:: restic expects `path-style URLs <https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro>`__
like for example ``s3.us-west-2.amazonaws.com/bucket_name``.
Virtual-hosted-style URLs like ``bucket_name.s3.us-west-2.amazonaws.com``,
where the bucket name is part of the hostname, are not supported. These must
be converted to path-style URLs instead, for example ``s3.us-west-2.amazonaws.com/bucket_name``.
.. note:: Certain S3-compatible servers do not properly implement the
``ListObjectsV2`` API, most notably Ceph versions before v14.2.5. On these
backends, as a temporary workaround, you can provide the
``-o s3.list-objects-v1=true`` option to use the older
``ListObjects`` API instead. This option may be removed in future
versions of restic.
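For example, the option can be passed to any restic command (the repository
URL below is a placeholder):
.. code-block:: console
$ restic -o s3.list-objects-v1=true -r s3:http://server:port/bucket_name snapshots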
Minio Server
************
@@ -299,6 +313,46 @@ this command.
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is irrecoverably lost.
Alibaba Cloud (Aliyun) Object Storage System (OSS)
**************************************************
`Alibaba OSS <https://www.alibabacloud.com/product/oss/>`__ is an
encrypted, secure, cost-effective, and easy-to-use object storage
service that enables you to store, back up, and archive large amounts
of data in the cloud.
Alibaba OSS is S3 compatible so it can be used as a storage provider
for a restic repository with a couple of extra parameters.
- Determine the correct `Alibaba OSS region endpoint <https://www.alibabacloud.com/help/doc-detail/31837.htm>`__ - this will be something like ``oss-eu-west-1.aliyuncs.com``
- You'll need the region name too - this will be something like ``oss-eu-west-1``
You must first set up the following environment variables with the
credentials of your Alibaba OSS account.
.. code-block:: console
$ export AWS_ACCESS_KEY_ID=<YOUR-OSS-ACCESS-KEY-ID>
$ export AWS_SECRET_ACCESS_KEY=<YOUR-OSS-SECRET-ACCESS-KEY>
Now you can easily initialize restic to use Alibaba OSS as a backend with
this command.
.. code-block:: console
$ ./restic -o s3.bucket-lookup=dns -o s3.region=<OSS-REGION> -r s3:https://<OSS-ENDPOINT>/<OSS-BUCKET-NAME> init
enter password for new backend:
enter password again:
created restic backend xxxxxxxxxx at s3:https://<OSS-ENDPOINT>/<OSS-BUCKET-NAME>
Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is irrecoverably lost.
For example with an actual endpoint:
.. code-block:: console
$ restic -o s3.bucket-lookup=dns -o s3.region=oss-eu-west-1 -r s3:https://oss-eu-west-1.aliyuncs.com/bucketname init
OpenStack Swift
***************
@@ -326,10 +380,14 @@ the naming convention of those variables follows the official Python Swift clien
$ export OS_AUTH_URL=<MY_AUTH_URL>
$ export OS_REGION_NAME=<MY_REGION_NAME>
$ export OS_USERNAME=<MY_USERNAME>
$ export OS_USER_ID=<MY_USER_ID>
$ export OS_PASSWORD=<MY_PASSWORD>
$ export OS_USER_DOMAIN_NAME=<MY_DOMAIN_NAME>
$ export OS_USER_DOMAIN_ID=<MY_DOMAIN_ID>
$ export OS_PROJECT_NAME=<MY_PROJECT_NAME>
$ export OS_PROJECT_DOMAIN_NAME=<MY_PROJECT_DOMAIN_NAME>
$ export OS_PROJECT_DOMAIN_ID=<MY_PROJECT_DOMAIN_ID>
$ export OS_TRUST_ID=<MY_TRUST_ID>
# For keystone v3 application credential authentication (application credential id)
$ export OS_AUTH_URL=<MY_AUTH_URL>
@@ -552,10 +610,9 @@ For debugging rclone, you can set the environment variable ``RCLONE_VERBOSE=2``.
The rclone backend has two additional options:
* ``-o rclone.program`` specifies the path to rclone, the default value is just ``rclone``
* ``-o rclone.args`` allows setting the arguments passed to rclone, by default this is ``serve restic --stdio --b2-hard-delete``
The reason for the ``--b2-hard-delete`` parameter can be found in the corresponding GitHub `issue #1657`_.
In order to start rclone, restic will build a list of arguments by joining the
following lists (in this order): ``rclone.program``, ``rclone.args`` and as the
@@ -581,7 +638,17 @@ rclone e.g. via SSH on a server, for example:
.. code-block:: console
$ restic -o rclone.program="ssh user@host rclone" -r rclone:b2:foo/bar
$ restic -o rclone.program="ssh user@remotehost rclone" -r rclone:b2:foo/bar
With these options, restic works with local files. It uses rclone and
credentials stored on ``remotehost`` to communicate with B2. All data (except
credentials) is encrypted/decrypted locally, then sent/received via
``remotehost`` to/from B2.
A more advanced version of this setup forbids specific hosts from removing
files in a repository. See the `blog post by Simon Ruderich
<https://ruderich.org/simon/notes/append-only-backups-with-restic-and-rclone>`_
for details.
The rclone command may also be hard-coded in the SSH configuration or the
user's public key, in this case it may be sufficient to just start the SSH

View File

@@ -131,24 +131,62 @@ restic encounters:
In fact several hosts may use the same repository to backup directories
and files leading to a greater de-duplication.
Now is a good time to run ``restic check`` to verify that all data
is properly stored in the repository. You should run this command regularly
to make sure the internal structure of the repository is free of errors.
File change detection
*********************
When restic encounters a file that has already been backed up, whether in the
current backup or a previous one, it makes sure the file's contents are only
stored once in the repository. To do so, it normally has to scan the entire
contents of every file. Because this can be very expensive, restic also uses a
change detection rule based on file metadata to determine whether a file is
likely unchanged since a previous backup. If it is, the file is not scanned
again.
Change detection is only performed for regular files (not special files,
symlinks or directories) that have the exact same path as they did in a
previous backup of the same location. If a file or one of its containing
directories was renamed, it is considered a different file and its entire
contents will be scanned again.
Metadata changes (permissions, ownership, etc.) are always included in the
backup, even if file contents are considered unchanged.
On **Unix** (including Linux and Mac), given that a file lives at the same
location as a file in a previous backup, the following file metadata
attributes have to match for its contents to be presumed unchanged:
* Modification timestamp (mtime).
* Metadata change timestamp (ctime).
* File size.
* Inode number (internal number used to reference a file in a filesystem).
The reason for requiring both mtime and ctime to match is that Unix programs
can freely change mtime (and some do). In such cases, a ctime change may be
the only hint that a file did change.
The following ``restic backup`` command line flags modify the change detection
rules:
* ``--force``: turn off change detection and rescan all files.
* ``--ignore-ctime``: require mtime to match, but allow ctime to differ.
* ``--ignore-inode``: require mtime to match, but allow inode number
and ctime to differ.
The option ``--ignore-inode`` exists to support FUSE-based filesystems and
pCloud, which do not assign stable inodes to files.
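For example, when backing up a FUSE-mounted pCloud folder (``/mnt/pcloud`` is a
hypothetical mount point):

.. code-block:: console

    $ restic -r /srv/restic-repo backup --ignore-inode /mnt/pcloud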
Note that the device id of the containing mount point is never taken into
account. Device numbers are not stable for removable devices and ZFS snapshots.
If you want to force a re-scan in such a case, you can change the mountpoint.
On **Windows**, a file is considered unchanged when its path and modification
time match, and only ``--force`` has any effect. The other options are
recognized but ignored.
Excluding Files
***************
@@ -276,36 +314,56 @@ suffix the size value with one of ``k``/``K`` for kilobytes, ``m``/``M`` for meg
Including Files
***************
The options ``--files-from``, ``--files-from-verbatim`` and ``--files-from-raw``
allow you to list files that should be backed up in a file, rather than on the
command line. This is useful when a lot of files have to be backed up that are
not in the same folder.
The argument passed to ``--files-from`` must be the name of a text file that
contains one pattern per line. The file must be encoded as UTF-8, or UTF-16
with a byte-order mark. Leading and trailing whitespace is removed from the
patterns. Empty lines and lines starting with a ``#`` are ignored.
The patterns are expanded, when the file is read, by the Go function
`filepath.Glob <https://golang.org/pkg/path/filepath/#Glob>`__.
The option ``--files-from-verbatim`` has the same behavior as ``--files-from``,
except that the file contains literal filenames. It does not expand patterns;
filenames are read verbatim. Lines starting with a ``#`` are not ignored; leading
and trailing whitespace is not trimmed off. Empty lines are still allowed, so that
files can be grouped.
``--files-from-raw`` is a third variant that requires filenames to be terminated
by a zero byte (the NUL character), so that it can even handle filenames that
contain newlines or are not encoded as UTF-8 (except on Windows, where the
listed filenames must still be encoded in UTF-8).
This option is the safest choice when generating filename lists from a script.
Its file format is the output format generated by GNU find's ``-print0`` option.
All three options interpret the argument ``-`` as standard input.
In all cases, paths may be absolute or relative to ``restic backup``'s
working directory.
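Because ``-`` reads the list from standard input, the output of a generating
command can also be piped directly into restic, for instance (a sketch using
GNU find):

.. code-block:: console

    $ find /tmp/somefiles -name '*.conf' -print0 | restic -r /srv/restic-repo backup --files-from-raw -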
For example, maybe you want to backup files which have a name that matches a
certain regular expression pattern (uses GNU find):
.. code-block:: console
$ find /tmp/somefiles -regex PATTERN -print0 > /tmp/files_to_backup
You can then use restic to backup the filtered files:
.. code-block:: console
$ restic -r /srv/restic-repo backup --files-from-raw /tmp/files_to_backup
You can combine all three options with each other and with the normal file arguments:
.. code-block:: console
$ restic backup --files-from /tmp/files_to_backup /tmp/some_additional_file
$ restic backup --files-from /tmp/glob-pattern --files-from-raw /tmp/generated-list /tmp/some_additional_file
Comparing Snapshots
*******************
@@ -352,10 +410,6 @@ written, and the next backup needs to write new metadata again. If you really
want to save the access time for files and directories, you can pass the
``--with-atime`` option to the ``backup`` command.
Reading data from stdin
***********************
@@ -446,13 +500,17 @@ environment variables. The following lists these environment variables:
OS_AUTH_URL Auth URL for keystone authentication
OS_REGION_NAME Region name for keystone authentication
OS_USERNAME Username for keystone authentication
OS_USER_ID User ID for keystone v3 authentication
OS_PASSWORD Password for keystone authentication
OS_TENANT_ID Tenant ID for keystone v2 authentication
OS_TENANT_NAME Tenant name for keystone v2 authentication
OS_USER_DOMAIN_NAME User domain name for keystone authentication
OS_USER_DOMAIN_ID User domain ID for keystone v3 authentication
OS_PROJECT_NAME Project name for keystone authentication
OS_PROJECT_DOMAIN_NAME Project domain name for keystone authentication
OS_PROJECT_DOMAIN_ID Project domain ID for keystone v3 authentication
OS_TRUST_ID Trust ID for keystone v3 authentication
OS_APPLICATION_CREDENTIAL_ID Application Credential ID (keystone v3)
OS_APPLICATION_CREDENTIAL_NAME Application Credential Name (keystone v3)
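For example, authentication with a keystone v3 application credential might be
configured as follows (``OS_APPLICATION_CREDENTIAL_SECRET`` is the matching
secret expected by OpenStack and is assumed here; check your provider's
documentation):

.. code-block:: console

    $ export OS_AUTH_URL=<MY_AUTH_URL>
    $ export OS_APPLICATION_CREDENTIAL_ID=<MY_APPLICATION_CREDENTIAL_ID>
    $ export OS_APPLICATION_CREDENTIAL_SECRET=<MY_APPLICATION_CREDENTIAL_SECRET>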


@@ -106,12 +106,16 @@ The example command copies all snapshots from the source repository
Snapshots which have previously been copied between repositories will
be skipped by later copy runs.
.. important:: This process will have to both download (read) and upload (write)
the entire snapshot(s) due to the different encryption keys used in the
source and destination repository. This *may incur higher bandwidth usage
and costs* than expected during normal backup runs.
.. important:: The copying process does not re-chunk files, which may break
deduplication between the files copied and files already stored in the
destination repository. This means that copied files, which existed in
both the source and destination repository, *may occupy up to twice their
space* in the destination repository. See below for how to avoid this.
For the destination repository ``--repo2`` the password can be read from
a file ``--password-file2`` or from a command ``--password-command2``.
@@ -121,12 +125,15 @@ pass the password via ``$RESTIC_PASSWORD2``. The key which should be used
for decryption can be selected by passing its ID via the flag ``--key-hint2``
or the environment variable ``$RESTIC_KEY_HINT2``.
.. note:: In case the source and destination repository use the same backend,
the configuration options and environment variables used to configure the
backend may apply to both repositories. For example, it might not be
possible to specify different accounts for the source and destination
repository. You can avoid this limitation by using the rclone backend
along with remotes which are configured in rclone.
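For example, with two rclone remotes named ``src`` and ``dst`` (hypothetical
names, configured in rclone beforehand), each side can use its own account:

.. code-block:: console

    $ restic -r rclone:src:repo copy --repo2 rclone:dst:repo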
Filtering snapshots to copy
---------------------------
The list of snapshots to copy can be filtered by host, path in the backup
and / or a comma-separated tag list:
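For example (a sketch; ``copy`` accepts the same ``--host``, ``--path`` and
``--tag`` filters as other snapshot-selecting commands):

.. code-block:: console

    $ restic -r /srv/restic-repo copy --repo2 /srv/restic-repo-copy --host luigi --path /srv --tag foo,bar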
@@ -142,7 +149,6 @@ which case only these instead of all snapshots will be copied:
$ restic -r /srv/restic-repo copy --repo2 /srv/restic-repo-copy 410b18a2 4e5d5487 latest
Ensuring deduplication for copied snapshots
-------------------------------------------
@@ -238,12 +244,17 @@ integrity of the pack files in the repository, use the ``--read-data`` flag:
repository, beware that it might incur higher bandwidth costs than usual
and also that it takes more time than the default ``check``.
Alternatively, use the ``--read-data-subset`` parameter to check only a
subset of the repository pack files at a time. It supports two ways to select a
subset. One selects a specific range of pack files, the other selects a random
percentage of pack files.
Use ``--read-data-subset=n/t`` to check only a subset of the repository pack
files at a time. The parameter takes two values, ``n`` and ``t``. When the check
command runs, all pack files in the repository are logically divided into ``t``
(roughly equal) groups, and only files that belong to group number ``n`` are
checked. For example, the following commands check all repository pack files
over 5 separate invocations:
.. code-block:: console
@@ -252,3 +263,21 @@ check all repository pack files over 5 separate invocations:
$ restic -r /srv/restic-repo check --read-data-subset=3/5
$ restic -r /srv/restic-repo check --read-data-subset=4/5
$ restic -r /srv/restic-repo check --read-data-subset=5/5
Use ``--read-data-subset=n%`` to check a randomly chosen subset of the
repository pack files. It takes one parameter, ``n``, the percentage of pack
files to check, given as an integer or floating point number. Repeated runs are
not guaranteed to eventually cover every pack file, but this makes it easy to
automate checking a small subset of data after each backup. For a floating point
value the following command may be used:
.. code-block:: console
$ restic -r /srv/restic-repo check --read-data-subset=2.5%
When checking bigger subsets, you will most likely want to specify the
percentage as an integer:
.. code-block:: console
$ restic -r /srv/restic-repo check --read-data-subset=10%


@@ -128,10 +128,13 @@ e.g.:
It is also possible to ``dump`` the contents of a whole folder structure to
stdout. To retain the information about the files and folders Restic will
output the contents in the tar (default) or zip format:
.. code-block:: console
$ restic -r /srv/restic-repo dump latest /home/other/work > restore.tar
.. code-block:: console
$ restic -r /srv/restic-repo dump -a zip latest /home/other/work > restore.zip


@@ -23,12 +23,11 @@ data that was referenced by the snapshot from the repository. This can
be automated with the ``--prune`` option of the ``forget`` command,
which runs ``prune`` automatically if snapshots have been removed.
.. Warning::
Pruning snapshots can be a time-consuming process, depending on the
number of snapshots and the amount of data to process. During a prune operation, the
repository is locked and backups cannot be completed. Please plan your
pruning so that there's time to complete it and it doesn't interfere with
regular backup runs.
It is advisable to run ``restic check`` after pruning, to make sure
you are alerted, should the internal data structures of the repository
@@ -82,20 +81,30 @@ command must be run:
$ restic -r /srv/restic-repo prune
enter password for repository:
repository 33002c5e opened successfully, password is correct
loading all snapshots...
loading indexes...
finding data that is still in use for 4 snapshots
[0:00] 100.00% 4 / 4 snapshots
searching used packs...
collecting packs for deletion and repacking
[0:00] 100.00% 5 / 5 packs processed
to repack: 69 blobs / 1.078 MiB
this removes 67 blobs / 1.047 MiB
to delete: 7 blobs / 25.726 KiB
total prune: 74 blobs / 1.072 MiB
remaining: 16 blobs / 38.003 KiB
unused size after prune: 0 B (0.00% of remaining size)
repacking packs
[0:00] 100.00% 2 / 2 packs repacked
rebuilding index
[0:00] 100.00% 3 / 3 packs processed
deleting obsolete index files
[0:00] 100.00% 3 / 3 files deleted
removing 3 old packs
[0:00] 100.00% 3 / 3 files deleted
done
Afterwards the repository is smaller.
@@ -119,19 +128,29 @@ to ``forget``:
8c02b94b 2017-02-21 10:48:33 mopped /home/user/work
1 snapshots have been removed, running prune
loading all snapshots...
loading indexes...
finding data that is still in use for 1 snapshots
[0:00] 100.00% 1 / 1 snapshots
searching used packs...
collecting packs for deletion and repacking
[0:00] 100.00% 5 / 5 packs processed
to repack: 69 blobs / 1.078 MiB
this removes 67 blobs / 1.047 MiB
to delete: 7 blobs / 25.726 KiB
total prune: 74 blobs / 1.072 MiB
remaining: 16 blobs / 38.003 KiB
unused size after prune: 0 B (0.00% of remaining size)
repacking packs
[0:00] 100.00% 2 / 2 packs repacked
rebuilding index
[0:00] 100.00% 3 / 3 packs processed
deleting obsolete index files
[0:00] 100.00% 3 / 3 files deleted
removing 3 old packs
[0:00] 100.00% 3 / 3 files deleted
done
Removing snapshots according to a policy
@@ -172,6 +191,10 @@ The ``forget`` command accepts the following parameters:
made in the two years, five months, seven days, and three hours before the
latest snapshot.
.. note:: All calendar-related ``--keep-*`` options work on the natural time
boundaries and not relative to when you run the ``forget`` command. Weeks
are Monday 00:00 -> Sunday 23:59, days 00:00 to 23:59, hours :00 to :59, etc.
Multiple policies will be ORed together so as to be as inclusive as possible
for keeping snapshots.
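For instance, the following sketch keeps a snapshot if it is among the last
five snapshots *or* is the most recent snapshot of one of the last seven days:

.. code-block:: console

    $ restic -r /srv/restic-repo forget --keep-last 5 --keep-daily 7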
@@ -282,3 +305,59 @@ last-day-of-the-months (11 or 12 depends if the 5 weeklies cross a month).
And finally 75 last-day-of-the-year snapshots. All other snapshots are
removed.
Customize pruning
*****************
To understand the custom options, we first explain how the pruning process works:
1. All snapshots and directories within snapshots are scanned to determine
which data is still in use.
2. For all files in the repository, restic finds out if the file is fully
used, partly used or completely unused.
3. Completely unused files are marked for deletion. Fully used files are kept.
A partially used file is either kept or marked for repacking depending on user
options.
Note that for repacking, restic must download the file from the repository
storage and re-upload the needed data to the repository. This can be very
time-consuming for remote repositories.
4. After deciding what to do, ``prune`` will actually perform the repack, modify
the index according to the changes and delete the obsolete files.
The ``prune`` command accepts the following options:
- ``--max-unused limit`` allows unused data up to the specified limit within the repository.
This lets restic keep partly used files instead of repacking them.
The limit can be specified in several ways:
* As an absolute size (e.g. ``200M``). If you want to minimize the space
used by your repository, pass ``0`` to this option.
* As a size relative to the total repo size (e.g. ``10%``). This means that
after prune, at most ``10%`` of the total data stored in the repo may be
unused data. If the repo after prune has a size of 500MB, then at most
50MB may be unused.
* If the string ``unlimited`` is passed, there is no limit for partly
unused files. This means that as long as some data is still used within
a file stored in the repo, restic will just leave it there. Use this if
you want to minimize the time and bandwidth used by the ``prune``
operation.
Restic tries to repack as little data as possible while still ensuring this
limit for unused data.
- ``--max-repack-size size`` if set, limits the total size of files to repack.
As ``prune`` first stores all repacked files and only deletes the obsolete files at the end,
this option can be handy if you expect many files to be repacked and are worried
about running low on storage.
- ``--repack-cacheable-only`` if set to true, only files which contain
metadata and would be stored in the cache are repacked. Other pack files are
not repacked if this option is set. This allows very fast repacking
using only cached data. It can, however, mean that the unused data in
your repository exceeds the value given by ``--max-unused``.
The default value is false.
- ``--dry-run`` only show what ``prune`` would do.
- ``--verbose`` increased verbosity shows additional statistics for ``prune``.
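For example, the following dry run (hypothetical values) would allow up to 10%
unused data and cap repacking at 1G, without changing the repository:

.. code-block:: console

    $ restic -r /srv/restic-repo prune --max-unused 10% --max-repack-size 1G --dry-run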
