Mirror of https://github.com/restic/restic.git
synced 2026-02-22 16:56:24 +00:00
Compare commits
104 Commits
| SHA1 |
|---|
| 303210aa08 |
| c029881379 |
| 6e89963c21 |
| 1ac560181b |
| 18ec27a0da |
| b40dea29ad |
| 0561155963 |
| 1aafc17212 |
| f11789c437 |
| 8cab0c121d |
| 5979414bcd |
| cc8b690b52 |
| a164dc9391 |
| 9a26be4e5b |
| 733519d895 |
| 3d5a0c799b |
| c4475ac58f |
| c9fd9b5275 |
| cadcab5a19 |
| 5ac9c1157a |
| 5715517e29 |
| ecc2458de8 |
| 2c6ba5d9ac |
| 0cc3647e51 |
| 6b700d02f5 |
| 2b09a10234 |
| 1c87d01bad |
| 78a3ffcfb9 |
| 4d77c0c21c |
| fb064afa34 |
| 7304738872 |
| 66efa425bf |
| d51e9d1b98 |
| e046428c94 |
| 75906edef5 |
| 203d775190 |
| ecd7ee85e8 |
| 2022355800 |
| 36f22a0feb |
| f58a44b911 |
| fe886a6439 |
| be23313072 |
| 3c112d9cae |
| 2970e38d92 |
| 870e7583a1 |
| db1c835c37 |
| 190bed9908 |
| 85f4c826db |
| 5da4b0fc7d |
| c1058005c3 |
| ca73808649 |
| f2ea91df38 |
| 15cc4d74b2 |
| bf9a507148 |
| 65b476ead9 |
| aaa1cc2c26 |
| 95434cff16 |
| 1b94ae1c00 |
| d138b38f28 |
| db8f5864fc |
| 1d8b21cdad |
| 3865b59716 |
| 7b8d1dc040 |
| d19a29f79e |
| 449c049ce9 |
| 9f436d80e1 |
| e277a92a2f |
| d9e22c2df1 |
| 4b0fb5af36 |
| 7519c73987 |
| 45a48eb4a8 |
| a2f30cde4c |
| 6ebcfe7c18 |
| 0022926eba |
| 3e3a0220ec |
| c125fb763d |
| b9f0f031b6 |
| aa7043151a |
| ebf22a35f4 |
| 3f069ac404 |
| 56e5467096 |
| 5ee932a124 |
| fed25714a4 |
| 8906d85ab8 |
| 97aafc1eec |
| 6a5c9f57c2 |
| 6cf13483b5 |
| f645306a18 |
| 186e10e0cb |
| 29a5bd5b30 |
| 06a01bc016 |
| cdc287a7f6 |
| deedc38129 |
| 1107eef215 |
| 60c7020bcb |
| b96ef48562 |
| 6f69ae1b8d |
| c4fbf2c779 |
| 7c084014fa |
| 879f6e0c81 |
| 8a97bb8661 |
| 5fe6de219d |
| c13f79da02 |
| db82e6b80c |
20 .travis.yml

@@ -3,14 +3,6 @@ sudo: false
matrix:
  include:
    - os: linux
      go: "1.9.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0 RESTIC_BUILD_SOLARIS=0
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    - os: linux
      go: "1.10.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0

@@ -19,9 +11,17 @@ matrix:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    # only run fuse and cloud backends tests on Travis for the latest Go on Linux
    - os: linux
      go: "1.11.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
          - $HOME/.cache/go-build
          - $HOME/gopath/pkg/mod

    # only run fuse and cloud backends tests on Travis for the latest Go on Linux
    - os: linux
      go: "1.12.x"
      sudo: true
      cache:
        directories:

@@ -29,7 +29,7 @@ matrix:
          - $HOME/gopath/pkg/mod

    - os: osx
      go: "1.11.x"
      go: "1.12.x"
      env: RESTIC_TEST_FUSE=0 RESTIC_TEST_CLOUD_BACKENDS=0
      cache:
        directories:
142 CHANGELOG.md

@@ -1,3 +1,145 @@
Changelog for restic 0.9.5 (2019-04-23)
=======================================

The following sections list the changes in restic 0.9.5 relevant to
restic users. The changes are ordered by importance.

Summary
-------

* Fix #2135: Return error when no bytes could be read from stdin
* Fix #2181: Don't cancel timeout after 30 seconds for self-update
* Fix #2203: Fix reading passwords from stdin
* Fix #2224: Don't abort the find command when a tree can't be loaded
* Enh #1895: Add case-insensitive include & exclude options
* Enh #1937: Support streaming JSON output for backup
* Enh #2155: Add OpenStack application credential auth for Swift
* Enh #2184: Add --json support to forget command
* Enh #2037: Add group-by option to snapshots command
* Enh #2124: Ability to dump folders to tar via stdout
* Enh #2139: Return error if no bytes could be read for `backup --stdin`
* Enh #2205: Add --ignore-inode option to backup cmd
* Enh #2220: Add config option to set S3 storage class

Details
-------

* Bugfix #2135: Return error when no bytes could be read from stdin

We assume that users reading backup data from stdin want to know when no data
could be read, so restic now returns an error when `backup --stdin` is called
but no bytes could be read. Usually, this means that an earlier command in a
pipe has failed. The documentation was amended and now recommends setting the
`pipefail` option (`set -o pipefail`).

https://github.com/restic/restic/pull/2135
https://github.com/restic/restic/pull/2139
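
The check itself amounts to a small counting wrapper around stdin. A minimal
sketch of the idea follows; this is not restic's actual implementation, and
the countingReader name is hypothetical:

    package main

    import (
        "fmt"
        "io"
        "io/ioutil"
        "os"
    )

    // countingReader counts the bytes read through it, so the caller can
    // distinguish "empty input" from "some data was read".
    type countingReader struct {
        r io.Reader
        n int64
    }

    func (c *countingReader) Read(p []byte) (int, error) {
        n, err := c.r.Read(p)
        c.n += int64(n)
        return n, err
    }

    func main() {
        src := &countingReader{r: os.Stdin}
        // Stand-in for feeding the archiver; here the data is just drained.
        if _, err := io.Copy(ioutil.Discard, src); err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }
        if src.n == 0 {
            // Mirrors the new behaviour: fail instead of storing an empty snapshot.
            fmt.Fprintln(os.Stderr, "error: no data read from stdin")
            os.Exit(1)
        }
    }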

* Bugfix #2181: Don't cancel timeout after 30 seconds for self-update

https://github.com/restic/restic/issues/2181

* Bugfix #2203: Fix reading passwords from stdin

Passwords for the `init`, `key add`, and `key passwd` commands can now be
read from non-terminal stdin.

https://github.com/restic/restic/issues/2203

* Bugfix #2224: Don't abort the find command when a tree can't be loaded

Change the find command so that missing trees don't result in a crash.
Instead, the error is logged to the debug log, and the tree ID is displayed
along with the snapshot it belongs to. This makes it possible to recover
repositories that are missing trees by forgetting the snapshots they are
used in.

https://github.com/restic/restic/issues/2224

* Enhancement #1895: Add case-insensitive include & exclude options

The backup and restore commands now have --iexclude and --iinclude flags as
case-insensitive variants of --exclude and --include.

https://github.com/restic/restic/issues/1895
https://github.com/restic/restic/pull/2032

* Enhancement #1937: Support streaming JSON output for backup

We've added support for machine-readable status output during backup: pass
the `--json` flag to `restic backup` and restic will output a stream of JSON
objects containing the current progress.

https://github.com/restic/restic/issues/1937
https://github.com/restic/restic/pull/1944
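
A sketch of a consumer for that stream; it assumes the objects are emitted
back to back and carry a message_type field as in restic's ui/jsonstatus
package (treat the exact field names as assumptions):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "os"
    )

    // statusMsg models only the fields this sketch cares about.
    type statusMsg struct {
        MessageType string  `json:"message_type"`
        PercentDone float64 `json:"percent_done"`
    }

    func main() {
        // Usage: restic backup --json /data | thisprogram
        dec := json.NewDecoder(os.Stdin)
        for {
            var m statusMsg
            err := dec.Decode(&m)
            if err == io.EOF {
                return
            }
            if err != nil {
                fmt.Fprintln(os.Stderr, "decode:", err)
                return
            }
            if m.MessageType == "status" {
                fmt.Printf("progress: %.1f%%\n", m.PercentDone*100)
            }
        }
    }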

* Enhancement #2155: Add OpenStack application credential auth for Swift

Since Queens, the OpenStack Identity (auth v3) service supports an
application credential auth method, which allows creating technical accounts
with limited roles. This adds an application credential authentication method
for the Swift backend.

https://github.com/restic/restic/issues/2155

* Enhancement #2184: Add --json support to forget command

The forget command now supports the --json argument, outputting the
information about what is (or would be) kept and removed from the repository.

https://github.com/restic/restic/issues/2184
https://github.com/restic/restic/pull/2185

* Enhancement #2037: Add group-by option to snapshots command

We have added an option to group the output of the snapshots command,
similar to the output of the forget command. The option is called
"--group-by" and accepts any combination of the values "host", "paths" and
"tags", separated by commas. The default behavior (not specifying --group-by)
is unchanged. The grouping is also supported in the JSON output.

https://github.com/restic/restic/issues/2037
https://github.com/restic/restic/pull/2087
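
The grouping key is a composite of the selected fields. One compact way to
build such a key, and the technique the reworked forget command below also
uses, is to marshal a small struct to JSON and use the resulting string as
the map key. A standalone sketch (snapshot data is made up):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type groupKey struct {
        Hostname string
        Paths    []string
        Tags     []string
    }

    func main() {
        groups := make(map[string][]string) // key -> snapshot IDs (stand-ins)
        snaps := []struct {
            ID   string
            Host string
        }{{"id1", "alpha"}, {"id2", "alpha"}, {"id3", "beta"}}

        for _, sn := range snaps {
            // Group by host only; Paths/Tags stay empty and so compare equal.
            k, _ := json.Marshal(groupKey{Hostname: sn.Host})
            groups[string(k)] = append(groups[string(k)], sn.ID)
        }
        for k, ids := range groups {
            fmt.Println(k, ids)
        }
    }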

* Enhancement #2124: Ability to dump folders to tar via stdout

We've added the ability to dump whole folders to stdout via the `dump`
command. Restic now requires at least Go 1.10 due to a limitation of the
standard library for Go <= 1.9.

https://github.com/restic/restic/issues/2123
https://github.com/restic/restic/pull/2124

* Enhancement #2139: Return error if no bytes could be read for `backup --stdin`

When restic is used to back up the output of a program, like
`mysqldump | restic backup --stdin`, it now returns an error if no bytes
could be read at all. This catches the failure case when `mysqldump` failed
for some reason and did not output any data to stdout.

https://github.com/restic/restic/pull/2139

* Enhancement #2205: Add --ignore-inode option to backup cmd

This option handles backup of virtual filesystems that do not keep fixed
inodes for files, such as FUSE-based filesystems or pCloud. Ignoring inode
changes allows restic to consider a file unchanged if its last modification
date and size are unchanged.

https://github.com/restic/restic/issues/1631
https://github.com/restic/restic/pull/2205
https://github.com/restic/restic/pull/2047
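
Conceptually the check looks like the sketch below; it is a simplification,
not restic's actual fileChanged code, and the field names are illustrative:

    package main

    import (
        "fmt"
        "time"
    )

    type fileMeta struct {
        Size    int64
        ModTime time.Time
        Inode   uint64
    }

    // changed reports whether a file should be re-read. With ignoreInode set,
    // a differing inode alone (as produced by FUSE-based filesystems that
    // invent fresh inode numbers) no longer forces a re-read.
    func changed(prev, cur fileMeta, ignoreInode bool) bool {
        if cur.Size != prev.Size || !cur.ModTime.Equal(prev.ModTime) {
            return true
        }
        if !ignoreInode && cur.Inode != prev.Inode {
            return true
        }
        return false
    }

    func main() {
        prev := fileMeta{Size: 1, ModTime: time.Unix(0, 0), Inode: 42}
        cur := prev
        cur.Inode = 99 // e.g. a virtual filesystem handing out new inodes
        fmt.Println(changed(prev, cur, false)) // true
        fmt.Println(changed(prev, cur, true))  // false
    }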

* Enhancement #2220: Add config option to set S3 storage class

The `s3.storage-class` option can be passed to restic (using `-o`) to specify
the storage class to be used for S3 objects created by restic.

The storage class is passed as-is to S3, so it needs to be understood by the
API. On AWS, it can be one of `STANDARD`, `STANDARD_IA`, `ONEZONE_IA`,
`INTELLIGENT_TIERING` and `REDUCED_REDUNDANCY`. If unspecified, the default
storage class is used (`STANDARD` on AWS).

You can mix storage classes in the same bucket, and the setting isn't stored
in the restic repository, so be sure to specify it with each command that
writes to S3.

https://github.com/restic/restic/issues/706
https://github.com/restic/restic/pull/2220
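
restic's S3 backend is built on the minio-go client, where the storage class
travels with each upload. The sketch below shows the rough shape against
minio-go's API; the endpoint, credentials and names are placeholders, and the
exact client version and signatures are an assumption, not restic's backend
code:

    package main

    import (
        "bytes"
        "log"

        minio "github.com/minio/minio-go"
    )

    func main() {
        client, err := minio.New("s3.amazonaws.com", "ACCESS_KEY", "SECRET_KEY", true)
        if err != nil {
            log.Fatal(err)
        }
        data := bytes.NewReader([]byte("payload"))
        // StorageClass is passed through to S3 unchanged, matching the
        // behaviour described above for `-o s3.storage-class=...`.
        _, err = client.PutObject("bucket", "key", data, int64(data.Len()),
            minio.PutObjectOptions{StorageClass: "STANDARD_IA"})
        if err != nil {
            log.Fatal(err)
        }
    }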

Changelog for restic 0.9.4 (2019-01-06)
=======================================

@@ -20,8 +20,8 @@ init:

install:
  - rmdir c:\go /s /q
  - appveyor DownloadFile https://dl.google.com/go/go1.11.windows-amd64.msi
  - msiexec /i go1.11.windows-amd64.msi /q
  - appveyor DownloadFile https://dl.google.com/go/go1.12.1.windows-amd64.msi
  - msiexec /i go1.12.1.windows-amd64.msi /q
  - go version
  - go env
  - appveyor DownloadFile http://sourceforge.netcologne.de/project/gnuwin32/tar/1.13-1/tar-1.13-1-bin.zip -FileName tar.zip

12 build.go

@@ -60,12 +60,12 @@ import (

// config contains the configuration for the program to build.
var config = Config{
    Name:             "restic",                               // name of the program executable and directory
    Namespace:        "github.com/restic/restic",             // subdir of GOPATH, e.g. "github.com/foo/bar"
    Main:             "./cmd/restic",                         // package name for the main package
    DefaultBuildTags: []string{"selfupdate"},                 // specify build tags which are always used
    Tests:            []string{"./..."},                      // tests to run
    MinVersion:       GoVersion{Major: 1, Minor: 9, Patch: 0}, // minimum Go version supported
    Name:             "restic",                               // name of the program executable and directory
    Namespace:        "github.com/restic/restic",             // subdir of GOPATH, e.g. "github.com/foo/bar"
    Main:             "./cmd/restic",                         // package name for the main package
    DefaultBuildTags: []string{"selfupdate"},                 // specify build tags which are always used
    Tests:            []string{"./..."},                      // tests to run
    MinVersion:       GoVersion{Major: 1, Minor: 10, Patch: 0}, // minimum Go version supported
}

// Config configures the build.

7 changelog/0.9.5_2019-04-23/issue-1895 (Normal file)

@@ -0,0 +1,7 @@
Enhancement: Add case-insensitive include & exclude options

The backup and restore commands now have --iexclude and --iinclude flags
as case-insensitive variants of --exclude and --include.

https://github.com/restic/restic/issues/1895
https://github.com/restic/restic/pull/2032

8 changelog/0.9.5_2019-04-23/issue-1937 (Normal file)

@@ -0,0 +1,8 @@
Enhancement: Support streaming JSON output for backup

We've added support for machine-readable status output during backup: pass
the `--json` flag to `restic backup` and restic will output a stream of JSON
objects containing the current progress.

https://github.com/restic/restic/issues/1937
https://github.com/restic/restic/pull/1944

10 changelog/0.9.5_2019-04-23/issue-2135 (Normal file)

@@ -0,0 +1,10 @@
Bugfix: Return error when no bytes could be read from stdin

We assume that users reading backup data from stdin want to know when no data
could be read, so restic now returns an error when `backup --stdin` is called
but no bytes could be read. Usually, this means that an earlier command in a
pipe has failed. The documentation was amended and now recommends setting the
`pipefail` option (`set -o pipefail`).

https://github.com/restic/restic/pull/2135
https://github.com/restic/restic/pull/2139

8 changelog/0.9.5_2019-04-23/issue-2155 (Normal file)

@@ -0,0 +1,8 @@
Enhancement: Add OpenStack application credential auth for Swift

Since Queens, the OpenStack Identity (auth v3) service supports an
application credential auth method, which allows creating technical accounts
with limited roles. This adds an application credential authentication method
for the Swift backend.

https://github.com/restic/restic/issues/2155

3 changelog/0.9.5_2019-04-23/issue-2181 (Normal file)

@@ -0,0 +1,3 @@
Bugfix: Don't cancel timeout after 30 seconds for self-update

https://github.com/restic/restic/issues/2181

8 changelog/0.9.5_2019-04-23/issue-2184 (Normal file)

@@ -0,0 +1,8 @@
Enhancement: Add --json support to forget command

The forget command now supports the --json argument, outputting the
information about what is (or would be) kept and removed from the
repository.

https://github.com/restic/restic/issues/2184
https://github.com/restic/restic/pull/2185

6 changelog/0.9.5_2019-04-23/issue-2203 (Normal file)

@@ -0,0 +1,6 @@
Bugfix: Fix reading passwords from stdin

Passwords for the `init`, `key add`, and `key passwd` commands can now be
read from non-terminal stdin.

https://github.com/restic/restic/issues/2203

9 changelog/0.9.5_2019-04-23/issue-2224 (Normal file)

@@ -0,0 +1,9 @@
Bugfix: Don't abort the find command when a tree can't be loaded

Change the find command so that missing trees don't result in a crash.
Instead, the error is logged to the debug log, and the tree ID is displayed
along with the snapshot it belongs to. This makes it possible to recover
repositories that are missing trees by forgetting the snapshots they are used
in.

https://github.com/restic/restic/issues/2224

10 changelog/0.9.5_2019-04-23/pull-2087 (Normal file)

@@ -0,0 +1,10 @@
Enhancement: Add group-by option to snapshots command

We have added an option to group the output of the snapshots command, similar
to the output of the forget command. The option is called "--group-by" and
accepts any combination of the values "host", "paths" and "tags", separated
by commas. The default behavior (not specifying --group-by) is unchanged.
The grouping is also supported in the JSON output.

https://github.com/restic/restic/issues/2037
https://github.com/restic/restic/pull/2087

8 changelog/0.9.5_2019-04-23/pull-2124 (Normal file)

@@ -0,0 +1,8 @@
Enhancement: Ability to dump folders to tar via stdout

We've added the ability to dump whole folders to stdout via the `dump` command.
Restic now requires at least Go 1.10 due to a limitation of the standard
library for Go <= 1.9.

https://github.com/restic/restic/pull/2124
https://github.com/restic/restic/issues/2123

8 changelog/0.9.5_2019-04-23/pull-2139 (Normal file)

@@ -0,0 +1,8 @@
Enhancement: Return error if no bytes could be read for `backup --stdin`

When restic is used to back up the output of a program, like `mysqldump |
restic backup --stdin`, it now returns an error if no bytes could be read at
all. This catches the failure case when `mysqldump` failed for some reason
and did not output any data to stdout.

https://github.com/restic/restic/pull/2139

10 changelog/0.9.5_2019-04-23/pull-2205 (Normal file)

@@ -0,0 +1,10 @@
Enhancement: Add --ignore-inode option to backup cmd

This option handles backup of virtual filesystems that do not keep fixed
inodes for files, such as FUSE-based filesystems or pCloud. Ignoring inode
changes allows restic to consider a file unchanged if its last modification
date and size are unchanged.

https://github.com/restic/restic/pull/2205
https://github.com/restic/restic/pull/2047
https://github.com/restic/restic/issues/1631

16 changelog/0.9.5_2019-04-23/pull-2220 (Normal file)

@@ -0,0 +1,16 @@
Enhancement: Add config option to set S3 storage class

The `s3.storage-class` option can be passed to restic (using `-o`) to
specify the storage class to be used for S3 objects created by restic.

The storage class is passed as-is to S3, so it needs to be understood by
the API. On AWS, it can be one of `STANDARD`, `STANDARD_IA`,
`ONEZONE_IA`, `INTELLIGENT_TIERING` and `REDUCED_REDUNDANCY`. If
unspecified, the default storage class is used (`STANDARD` on AWS).

You can mix storage classes in the same bucket, and the setting isn't
stored in the restic repository, so be sure to specify it with each
command that writes to S3.

https://github.com/restic/restic/pull/2220
https://github.com/restic/restic/issues/706

131 cmd/restic/acl.go (Normal file)

@@ -0,0 +1,131 @@
package main

// Adapted from https://github.com/maxymania/go-system/blob/master/posix_acl/posix_acl.go

import (
    "bytes"
    "encoding/binary"
    "fmt"
)

const (
    aclUserOwner  = 0x0001
    aclUser       = 0x0002
    aclGroupOwner = 0x0004
    aclGroup      = 0x0008
    aclMask       = 0x0010
    aclOthers     = 0x0020
)

type aclSID uint64

type aclElem struct {
    Tag  uint16
    Perm uint16
    ID   uint32
}

type acl struct {
    Version uint32
    List    []aclElement
}

type aclElement struct {
    aclSID
    Perm uint16
}

func (a *aclSID) setUID(uid uint32) {
    *a = aclSID(uid) | (aclUser << 32)
}
func (a *aclSID) setGID(gid uint32) {
    *a = aclSID(gid) | (aclGroup << 32)
}

func (a *aclSID) setType(tp int) {
    *a = aclSID(tp) << 32
}

func (a aclSID) getType() int {
    return int(a >> 32)
}
func (a aclSID) getID() uint32 {
    return uint32(a & 0xffffffff)
}
func (a aclSID) String() string {
    switch a >> 32 {
    case aclUserOwner:
        return "user::"
    case aclUser:
        return fmt.Sprintf("user:%v:", a.getID())
    case aclGroupOwner:
        return "group::"
    case aclGroup:
        return fmt.Sprintf("group:%v:", a.getID())
    case aclMask:
        return "mask::"
    case aclOthers:
        return "other::"
    }
    return "?:"
}

func (a aclElement) String() string {
    str := ""
    if (a.Perm & 4) != 0 {
        str += "r"
    } else {
        str += "-"
    }
    if (a.Perm & 2) != 0 {
        str += "w"
    } else {
        str += "-"
    }
    if (a.Perm & 1) != 0 {
        str += "x"
    } else {
        str += "-"
    }
    return fmt.Sprintf("%v%v", a.aclSID, str)
}

func (a *acl) decode(xattr []byte) {
    var elem aclElement
    ae := new(aclElem)
    nr := bytes.NewReader(xattr)
    e := binary.Read(nr, binary.LittleEndian, &a.Version)
    if e != nil {
        a.Version = 0
        return
    }
    if len(a.List) > 0 {
        a.List = a.List[:0]
    }
    for binary.Read(nr, binary.LittleEndian, ae) == nil {
        elem.aclSID = (aclSID(ae.Tag) << 32) | aclSID(ae.ID)
        elem.Perm = ae.Perm
        a.List = append(a.List, elem)
    }
}

func (a *acl) encode() []byte {
    buf := new(bytes.Buffer)
    ae := new(aclElem)
    binary.Write(buf, binary.LittleEndian, &a.Version)
    for _, elem := range a.List {
        ae.Tag = uint16(elem.getType())
        ae.Perm = elem.Perm
        ae.ID = elem.getID()
        binary.Write(buf, binary.LittleEndian, ae)
    }
    return buf.Bytes()
}

func (a *acl) String() string {
    var finalacl string
    for _, acl := range a.List {
        finalacl += acl.String() + "\n"
    }
    return finalacl
}
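
The xattr payload handled by decode and encode is a little-endian uint32
version followed by repeated {tag uint16, perm uint16, id uint32} entries,
and aclSID packs the tag into the upper 32 bits and the id into the lower 32
(permission bits: r=4, w=2, x=1). A standalone sketch of one entry, with
values chosen purely for illustration:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    func main() {
        buf := new(bytes.Buffer)
        binary.Write(buf, binary.LittleEndian, uint32(2))      // version
        binary.Write(buf, binary.LittleEndian, uint16(0x0002)) // tag: named user
        binary.Write(buf, binary.LittleEndian, uint16(6))      // perm: rw-
        binary.Write(buf, binary.LittleEndian, uint32(1000))   // uid

        sid := uint64(0x0002)<<32 | 1000 // how aclSID packs tag and id
        fmt.Printf("blob: % x\n", buf.Bytes())
        fmt.Printf("tag=%#x id=%d\n", sid>>32, uint32(sid))
    }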

96 cmd/restic/acl_test.go (Normal file)

@@ -0,0 +1,96 @@
package main

import (
    "reflect"
    "testing"
)

func Test_acl_decode(t *testing.T) {
    type args struct {
        xattr []byte
    }
    tests := []struct {
        name string
        args args
        want string
    }{
        {
            name: "decode string",
            args: args{
                xattr: []byte{2, 0, 0, 0, 1, 0, 6, 0, 255, 255, 255, 255, 2, 0, 7, 0, 0, 0, 0, 0, 2, 0, 7, 0, 254, 255, 0, 0, 4, 0, 7, 0, 255, 255, 255, 255, 16, 0, 7, 0, 255, 255, 255, 255, 32, 0, 4, 0, 255, 255, 255, 255},
            },
            want: "user::rw-\nuser:0:rwx\nuser:65534:rwx\ngroup::rwx\nmask::rwx\nother::r--\n",
        },
        {
            name: "decode fail",
            args: args{
                xattr: []byte("abctest"),
            },
            want: "",
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            a := &acl{}
            a.decode(tt.args.xattr)
            if tt.want != a.String() {
                t.Errorf("acl.decode() = %v, want: %v", a.String(), tt.want)
            }
        })
    }
}

func Test_acl_encode(t *testing.T) {
    tests := []struct {
        name string
        want []byte
        args []aclElement
    }{
        {
            name: "encode values",
            want: []byte{2, 0, 0, 0, 1, 0, 6, 0, 255, 255, 255, 255, 2, 0, 7, 0, 0, 0, 0, 0, 2, 0, 7, 0, 254, 255, 0, 0, 4, 0, 7, 0, 255, 255, 255, 255, 16, 0, 7, 0, 255, 255, 255, 255, 32, 0, 4, 0, 255, 255, 255, 255},
            args: []aclElement{
                {
                    aclSID: 8589934591,
                    Perm:   6,
                },
                {
                    aclSID: 8589934592,
                    Perm:   7,
                },
                {
                    aclSID: 8590000126,
                    Perm:   7,
                },
                {
                    aclSID: 21474836479,
                    Perm:   7,
                },
                {
                    aclSID: 73014444031,
                    Perm:   7,
                },
                {
                    aclSID: 141733920767,
                    Perm:   4,
                },
            },
        },
        {
            name: "encode fail",
            want: []byte{2, 0, 0, 0},
            args: []aclElement{},
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            a := &acl{
                Version: 2,
                List:    tt.args,
            }
            if got := a.encode(); !reflect.DeepEqual(got, tt.want) {
                t.Errorf("acl.encode() = %v, want %v", got, tt.want)
            }
        })
    }
}

@@ -5,6 +5,7 @@ import (
    "bytes"
    "context"
    "fmt"
    "io"
    "io/ioutil"
    "os"
    "path/filepath"

@@ -23,6 +24,7 @@ import (
    "github.com/restic/restic/internal/restic"
    "github.com/restic/restic/internal/textfile"
    "github.com/restic/restic/internal/ui"
    "github.com/restic/restic/internal/ui/jsonstatus"
    "github.com/restic/restic/internal/ui/termstatus"
)

@@ -68,20 +70,22 @@ given as the arguments.

// BackupOptions bundles all options for the backup command.
type BackupOptions struct {
    Parent           string
    Force            bool
    Excludes         []string
    ExcludeFiles     []string
    ExcludeOtherFS   bool
    ExcludeIfPresent []string
    ExcludeCaches    bool
    Stdin            bool
    StdinFilename    string
    Tags             []string
    Host             string
    FilesFrom        []string
    TimeStamp        string
    WithAtime        bool
    Parent              string
    Force               bool
    Excludes            []string
    InsensitiveExcludes []string
    ExcludeFiles        []string
    ExcludeOtherFS      bool
    ExcludeIfPresent    []string
    ExcludeCaches       bool
    Stdin               bool
    StdinFilename       string
    Tags                []string
    Host                string
    FilesFrom           []string
    TimeStamp           string
    WithAtime           bool
    IgnoreInode         bool
}

var backupOptions BackupOptions

@@ -93,10 +97,11 @@ func init() {
    f.StringVar(&backupOptions.Parent, "parent", "", "use this parent snapshot (default: last snapshot in the repo that has the same target files/directories)")
    f.BoolVarP(&backupOptions.Force, "force", "f", false, `force re-reading the target files/directories (overrides the "parent" flag)`)
    f.StringArrayVarP(&backupOptions.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
    f.StringArrayVar(&backupOptions.InsensitiveExcludes, "iexclude", nil, "same as `--exclude` but ignores the casing of filenames")
    f.StringArrayVar(&backupOptions.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
    f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems")
    f.StringArrayVar(&backupOptions.ExcludeIfPresent, "exclude-if-present", nil, "takes filename[:header], exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)")
    f.BoolVar(&backupOptions.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file`)
    f.BoolVar(&backupOptions.ExcludeCaches, "exclude-caches", false, `excludes cache directories that are marked with a CACHEDIR.TAG file. See http://bford.info/cachedir/spec.html for the Cache Directory Tagging Standard`)
    f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
    f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "file name to use when reading from stdin")
    f.StringArrayVar(&backupOptions.Tags, "tag", nil, "add a `tag` for the new snapshot (can be specified multiple times)")

@@ -108,6 +113,7 @@ func init() {
    f.StringArrayVar(&backupOptions.FilesFrom, "files-from", nil, "read the files to backup from file (can be combined with file args/can be specified multiple times)")
    f.StringVar(&backupOptions.TimeStamp, "time", "", "time of the backup (ex. '2012-11-01 22:08:41') (default: now)")
    f.BoolVar(&backupOptions.WithAtime, "with-atime", false, "store the atime for all files and directories")
    f.BoolVar(&backupOptions.IgnoreInode, "ignore-inode", false, "ignore inode number changes when checking for modified files")
}

// filterExisting returns a slice of all existing items, or an error if no

@@ -222,6 +228,10 @@ func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository, t
        opts.Excludes = append(opts.Excludes, excludes...)
    }

    if len(opts.InsensitiveExcludes) > 0 {
        fs = append(fs, rejectByInsensitivePattern(opts.InsensitiveExcludes))
    }

    if len(opts.Excludes) > 0 {
        fs = append(fs, rejectByPattern(opts.Excludes))
    }

@@ -395,13 +405,43 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina

    var t tomb.Tomb

    term.Print("open repository\n")
    if gopts.verbosity >= 2 && !gopts.JSON {
        term.Print("open repository\n")
    }

    repo, err := OpenRepository(gopts)
    if err != nil {
        return err
    }

    p := ui.NewBackup(term, gopts.verbosity)
    type ArchiveProgressReporter interface {
        CompleteItem(item string, previous, current *restic.Node, s archiver.ItemStats, d time.Duration)
        StartFile(filename string)
        CompleteBlob(filename string, bytes uint64)
        ScannerError(item string, fi os.FileInfo, err error) error
        ReportTotal(item string, s archiver.ScanStats)
        SetMinUpdatePause(d time.Duration)
        Run(ctx context.Context) error
        Error(item string, fi os.FileInfo, err error) error
        Finish(snapshotID restic.ID)

        // ui.StdioWrapper
        Stdout() io.WriteCloser
        Stderr() io.WriteCloser

        // ui.Message
        E(msg string, args ...interface{})
        P(msg string, args ...interface{})
        V(msg string, args ...interface{})
        VV(msg string, args ...interface{})
    }

    var p ArchiveProgressReporter
    if gopts.JSON {
        p = jsonstatus.NewBackup(term, gopts.verbosity)
    } else {
        p = ui.NewBackup(term, gopts.verbosity)
    }

    // use the terminal for stdout/stderr
    prevStdout, prevStderr := gopts.stdout, gopts.stderr

@@ -416,13 +456,15 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
        if fps > 60 {
            fps = 60
        }
        p.MinUpdatePause = time.Second / time.Duration(fps)
        p.SetMinUpdatePause(time.Second / time.Duration(fps))
        }
    }

    t.Go(func() error { return p.Run(t.Context(gopts.ctx)) })

    p.V("lock repository")
    if !gopts.JSON {
        p.V("lock repository")
    }
    lock, err := lockRepo(repo)
    defer unlockRepo(lock)
    if err != nil {

@@ -441,7 +483,9 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
        return err
    }

    p.V("load index files")
    if !gopts.JSON {
        p.V("load index files")
    }
    err = repo.LoadIndex(gopts.ctx)
    if err != nil {
        return err

@@ -452,7 +496,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
        return err
    }

    if parentSnapshotID != nil {
    if !gopts.JSON && parentSnapshotID != nil {
        p.V("using parent snapshot %v\n", parentSnapshotID.Str())
    }

@@ -476,7 +520,9 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina

    var targetFS fs.FS = fs.Local{}
    if opts.Stdin {
        p.V("read data from stdin")
        if !gopts.JSON {
            p.V("read data from stdin")
        }
        targetFS = &fs.Reader{
            ModTime: timeStamp,
            Name:    opts.StdinFilename,

@@ -492,7 +538,9 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
    sc.Error = p.ScannerError
    sc.Result = p.ReportTotal

    p.V("start scan on %v", targets)
    if !gopts.JSON {
        p.V("start scan on %v", targets)
    }
    t.Go(func() error { return sc.Scan(t.Context(gopts.ctx), targets) })

    arch := archiver.New(repo, targetFS, archiver.Options{})

@@ -500,9 +548,10 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
    arch.Select = selectFilter
    arch.WithAtime = opts.WithAtime
    arch.Error = p.Error
    arch.CompleteItem = p.CompleteItemFn
    arch.CompleteItem = p.CompleteItem
    arch.StartFile = p.StartFile
    arch.CompleteBlob = p.CompleteBlob
    arch.IgnoreInode = opts.IgnoreInode

    if parentSnapshotID == nil {
        parentSnapshotID = &restic.ID{}

@@ -519,10 +568,14 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
    uploader := archiver.IndexUploader{
        Repository: repo,
        Start: func() {
            p.VV("uploading intermediate index")
            if !gopts.JSON {
                p.VV("uploading intermediate index")
            }
        },
        Complete: func(id restic.ID) {
            p.V("uploaded intermediate index %v", id.Str())
            if !gopts.JSON {
                p.V("uploaded intermediate index %v", id.Str())
            }
        },
    }

@@ -530,14 +583,18 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, term *termstatus.Termina
        return uploader.Upload(gopts.ctx, t.Context(gopts.ctx), 30*time.Second)
    })

    p.V("start backup on %v", targets)
    if !gopts.JSON {
        p.V("start backup on %v", targets)
    }
    _, id, err := arch.Snapshot(gopts.ctx, targets, snapshotOpts)
    if err != nil {
        return errors.Fatalf("unable to save snapshot: %v", err)
    }

    p.Finish()
    p.P("snapshot %s saved\n", id.Str())
    p.Finish(id)
    if !gopts.JSON {
        p.P("snapshot %s saved\n", id.Str())
    }

    // cleanly shutdown all running goroutines
    t.Kill(nil)

@@ -74,7 +74,7 @@ func runCat(gopts GlobalOptions, args []string) error {
        fmt.Println(string(buf))
        return nil
    case "index":
        buf, err := repo.LoadAndDecrypt(gopts.ctx, restic.IndexFile, id)
        buf, err := repo.LoadAndDecrypt(gopts.ctx, nil, restic.IndexFile, id)
        if err != nil {
            return err
        }

@@ -99,7 +99,7 @@ func runCat(gopts GlobalOptions, args []string) error {
        return nil
    case "key":
        h := restic.Handle{Type: restic.KeyFile, Name: id.String()}
        buf, err := backend.LoadAll(gopts.ctx, repo.Backend(), h)
        buf, err := backend.LoadAll(gopts.ctx, nil, repo.Backend(), h)
        if err != nil {
            return err
        }

@@ -150,7 +150,7 @@ func runCat(gopts GlobalOptions, args []string) error {
    switch tpe {
    case "pack":
        h := restic.Handle{Type: restic.DataFile, Name: id.String()}
        buf, err := backend.LoadAll(gopts.ctx, repo.Backend(), h)
        buf, err := backend.LoadAll(gopts.ctx, nil, repo.Backend(), h)
        if err != nil {
            return err
        }
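
The new nil argument in these calls is a reusable output buffer: callers pass
nil to let the function allocate, or a previously returned slice to reuse its
backing array. A generic sketch of that convention (loadAll here is a
stand-in, not the restic function):

    package main

    import "fmt"

    // loadAll demonstrates the buffer-reuse convention: the caller may pass
    // a previously returned buffer, and it is grown only if too small.
    func loadAll(buf []byte, n int) []byte {
        if cap(buf) < n {
            buf = make([]byte, n)
        }
        buf = buf[:n]
        for i := range buf {
            buf[i] = byte(i) // stand-in for the actual read
        }
        return buf
    }

    func main() {
        var buf []byte
        buf = loadAll(buf, 16)          // allocates
        buf = loadAll(buf, 8)           // reuses the same backing array
        fmt.Println(len(buf), cap(buf)) // 8 16
    }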

@@ -1,15 +1,19 @@
package main

import (
    "archive/tar"
    "context"
    "fmt"
    "io"
    "os"
    "path"
    "path/filepath"
    "strings"

    "github.com/restic/restic/internal/debug"
    "github.com/restic/restic/internal/errors"
    "github.com/restic/restic/internal/restic"
    "github.com/restic/restic/internal/walker"

    "github.com/spf13/cobra"
)

@@ -50,41 +54,18 @@ func init() {

func splitPath(p string) []string {
    d, f := path.Split(p)
    if d == "" || d == "/" {
    if d == "" {
        return []string{f}
    }
    if d == "/" {
        return []string{d}
    }
    s := splitPath(path.Clean(d))
    return append(s, f)
}

func dumpNode(ctx context.Context, repo restic.Repository, node *restic.Node) error {
    var buf []byte
    for _, id := range node.Content {
        size, found := repo.LookupBlobSize(id, restic.DataBlob)
        if !found {
            return errors.Errorf("id %v not found in repository", id)
        }
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string, pathToPrint string) error {

        buf = buf[:cap(buf)]
        if len(buf) < restic.CiphertextLength(int(size)) {
            buf = restic.NewBlobBuffer(int(size))
        }

        n, err := repo.LoadBlob(ctx, restic.DataBlob, id, buf)
        if err != nil {
            return err
        }
        buf = buf[:n]

        _, err = os.Stdout.Write(buf)
        if err != nil {
            return errors.Wrap(err, "Write")
        }
    }
    return nil
}

func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repository, prefix string, pathComponents []string) error {
    if tree == nil {
        return fmt.Errorf("called with a nil tree")
    }
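
With the new splitPath, the first element of an absolute path is folded into
a leading "/" component, which printFromTree (in the next hunk) then treats
as matching any node name. A small driver around the function exactly as it
appears in this hunk:

    package main

    import (
        "fmt"
        "path"
    )

    // splitPath as introduced in this hunk.
    func splitPath(p string) []string {
        d, f := path.Split(p)
        if d == "" {
            return []string{f}
        }
        if d == "/" {
            return []string{d}
        }
        s := splitPath(path.Clean(d))
        return append(s, f)
    }

    func main() {
        fmt.Println(splitPath(path.Clean("/home/user/file"))) // [/ user file]
        fmt.Println(splitPath(path.Clean("/")))               // [/]
        fmt.Println(splitPath("file"))                        // [file]
    }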

@@ -97,16 +78,19 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.Repositor
    }
    item := filepath.Join(prefix, pathComponents[0])
    for _, node := range tree.Nodes {
        if node.Name == pathComponents[0] {
        if node.Name == pathComponents[0] || pathComponents[0] == "/" {
            switch {
            case l == 1 && node.Type == "file":
                return dumpNode(ctx, repo, node)
                return getNodeData(ctx, os.Stdout, repo, node)
            case l > 1 && node.Type == "dir":
                subtree, err := repo.LoadTree(ctx, *node.Subtree)
                if err != nil {
                    return errors.Wrapf(err, "cannot load subtree for %q", item)
                }
                return printFromTree(ctx, subtree, repo, item, pathComponents[1:])
                return printFromTree(ctx, subtree, repo, item, pathComponents[1:], pathToPrint)
            case node.Type == "dir":
                node.Path = pathToPrint
                return tarTree(ctx, repo, node, pathToPrint)
            case l > 1:
                return fmt.Errorf("%q should be a dir, but is a %q", item, node.Type)
            case node.Type != "file":

@@ -129,7 +113,7 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {

    debug.Log("dump file %q from %q", pathToPrint, snapshotIDString)

    splittedPath := splitPath(pathToPrint)
    splittedPath := splitPath(path.Clean(pathToPrint))

    repo, err := OpenRepository(gopts)
    if err != nil {

@@ -173,10 +157,143 @@ func runDump(opts DumpOptions, gopts GlobalOptions, args []string) error {
        Exitf(2, "loading tree for snapshot %q failed: %v", snapshotIDString, err)
    }

    err = printFromTree(ctx, tree, repo, "", splittedPath)
    err = printFromTree(ctx, tree, repo, "", splittedPath, pathToPrint)
    if err != nil {
        Exitf(2, "cannot dump file: %v", err)
    }

    return nil
}

func getNodeData(ctx context.Context, output io.Writer, repo restic.Repository, node *restic.Node) error {
    var buf []byte
    for _, id := range node.Content {

        size, found := repo.LookupBlobSize(id, restic.DataBlob)
        if !found {
            return errors.Errorf("id %v not found in repository", id)
        }

        buf = buf[:cap(buf)]
        if len(buf) < restic.CiphertextLength(int(size)) {
            buf = restic.NewBlobBuffer(int(size))
        }

        n, err := repo.LoadBlob(ctx, restic.DataBlob, id, buf)
        if err != nil {
            return err
        }
        buf = buf[:n]

        _, err = output.Write(buf)
        if err != nil {
            return errors.Wrap(err, "Write")
        }

    }
    return nil
}

func tarTree(ctx context.Context, repo restic.Repository, rootNode *restic.Node, rootPath string) error {

    if stdoutIsTerminal() {
        return fmt.Errorf("stdout is the terminal, please redirect output")
    }

    tw := tar.NewWriter(os.Stdout)
    defer tw.Close()

    // If we want to dump "/" we'll need to add the name of the first node, too
    // as it would get lost otherwise.
    if rootNode.Path == "/" {
        rootNode.Path = path.Join(rootNode.Path, rootNode.Name)
        rootPath = rootNode.Path
    }

    // we know that rootNode is a folder and walker.Walk will already process
    // the next node, so we have to tar this one first, too
    if err := tarNode(ctx, tw, rootNode, repo); err != nil {
        return err
    }

    err := walker.Walk(ctx, repo, *rootNode.Subtree, nil, func(_ restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
        if err != nil {
            return false, err
        }
        if node == nil {
            return false, nil
        }

        node.Path = path.Join(rootPath, nodepath)

        if node.Type == "file" || node.Type == "symlink" || node.Type == "dir" {
            err := tarNode(ctx, tw, node, repo)
            if err != nil {
                return false, err
            }
        }

        return false, nil
    })

    return err
}

func tarNode(ctx context.Context, tw *tar.Writer, node *restic.Node, repo restic.Repository) error {

    header := &tar.Header{
        Name:       node.Path,
        Size:       int64(node.Size),
        Mode:       int64(node.Mode),
        Uid:        int(node.UID),
        Gid:        int(node.GID),
        ModTime:    node.ModTime,
        AccessTime: node.AccessTime,
        ChangeTime: node.ChangeTime,
        PAXRecords: parseXattrs(node.ExtendedAttributes),
    }

    if node.Type == "symlink" {
        header.Typeflag = tar.TypeSymlink
        header.Linkname = node.LinkTarget
    }

    if node.Type == "dir" {
        header.Typeflag = tar.TypeDir
    }

    err := tw.WriteHeader(header)

    if err != nil {
        return errors.Wrap(err, "TarHeader ")
    }

    return getNodeData(ctx, tw, repo, node)

}

func parseXattrs(xattrs []restic.ExtendedAttribute) map[string]string {
    tmpMap := make(map[string]string)

    for _, attr := range xattrs {
        attrString := string(attr.Value)

        if strings.HasPrefix(attr.Name, "system.posix_acl_") {
            na := acl{}
            na.decode(attr.Value)

            if na.String() != "" {
                if strings.Contains(attr.Name, "system.posix_acl_access") {
                    tmpMap["SCHILY.acl.access"] = na.String()
                } else if strings.Contains(attr.Name, "system.posix_acl_default") {
                    tmpMap["SCHILY.acl.default"] = na.String()
                }
            }

        } else {
            tmpMap["SCHILY.xattr."+attr.Name] = attrString
        }
    }

    return tmpMap
}
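
parseXattrs emits the PAX record keys understood by GNU tar and star, and
tar.Header.PAXRecords is precisely the standard-library feature added in Go
1.10, hence the new minimum Go version noted in the changelog. A sketch of
how such records end up in an archive (file name and record values are made
up):

    package main

    import (
        "archive/tar"
        "log"
        "os"
    )

    func main() {
        tw := tar.NewWriter(os.Stdout)
        defer tw.Close()

        data := []byte("hello")
        hdr := &tar.Header{
            Name:   "file.txt",
            Mode:   0644,
            Size:   int64(len(data)),
            Format: tar.FormatPAX, // PAX records require the PAX format
            PAXRecords: map[string]string{
                "SCHILY.xattr.user.comment": "demo", // same key scheme as parseXattrs
            },
        }
        if err := tw.WriteHeader(hdr); err != nil {
            log.Fatal(err)
        }
        if _, err := tw.Write(data); err != nil {
            log.Fatal(err)
        }
    }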

@@ -62,7 +62,7 @@ func init() {
    f.BoolVar(&findOptions.BlobID, "blob", false, "pattern is a blob-ID")
    f.BoolVar(&findOptions.TreeID, "tree", false, "pattern is a tree-ID")
    f.BoolVar(&findOptions.PackID, "pack", false, "pattern is a pack-ID")
    f.BoolVar(&findOptions.ShowPackID, "show-pack-id", false, "display the pack-ID the blobs belong to (with --blob)")
    f.BoolVar(&findOptions.ShowPackID, "show-pack-id", false, "display the pack-ID the blobs belong to (with --blob or --tree)")
    f.BoolVarP(&findOptions.CaseInsensitive, "ignore-case", "i", false, "ignore case for pattern")
    f.BoolVarP(&findOptions.ListLong, "long", "l", false, "use a long listing format showing size and mode")

@@ -258,9 +258,13 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
    }

    f.out.newsn = sn
    return walker.Walk(ctx, f.repo, *sn.Tree, f.ignoreTrees, func(_ restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
    return walker.Walk(ctx, f.repo, *sn.Tree, f.ignoreTrees, func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
        if err != nil {
            return false, err
            debug.Log("Error loading tree %v: %v", parentTreeID, err)

            Printf("Unable to load tree %s\n ... which belongs to snapshot %s.\n", parentTreeID, sn.ID())

            return false, walker.SkipNode
        }

        if node == nil {

@@ -340,7 +344,11 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
    f.out.newsn = sn
    return walker.Walk(ctx, f.repo, *sn.Tree, f.ignoreTrees, func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) (bool, error) {
        if err != nil {
            return false, err
            debug.Log("Error loading tree %v: %v", parentTreeID, err)

            Printf("Unable to load tree %s\n ... which belongs to snapshot %s.\n", parentTreeID, sn.ID())

            return false, walker.SkipNode
        }

        if node == nil {

@@ -442,27 +450,36 @@ func (f *Finder) packsToBlobs(ctx context.Context, packs []string) error {
    return nil
}

func (f *Finder) findBlobsPacks(ctx context.Context) {
func (f *Finder) findObjectPack(ctx context.Context, id string, t restic.BlobType) {
    idx := f.repo.Index()

    rid, err := restic.ParseID(id)
    if err != nil {
        Printf("Note: cannot find pack for object '%s', unable to parse ID: %v\n", id, err)
        return
    }

    blobs, found := idx.Lookup(rid, t)
    if !found {
        Printf("Object %s not found in the index\n", rid.Str())
        return
    }

    for _, b := range blobs {
        if b.ID.Equal(rid) {
            Printf("Object belongs to pack %s\n ... Pack %s: %s\n", b.PackID, b.PackID.Str(), b.String())
            break
        }
    }
}

func (f *Finder) findObjectsPacks(ctx context.Context) {
    for i := range f.blobIDs {
        rid, err := restic.ParseID(i)
        if err != nil {
            Printf("Note: cannot find pack for blob '%s', unable to parse ID: %v\n", i, err)
            continue
        }
        f.findObjectPack(ctx, i, restic.DataBlob)
    }

        blobs, found := idx.Lookup(rid, restic.DataBlob)
        if !found {
            Printf("Blob %s not found in the index\n", rid.Str())
            continue
        }

        for _, b := range blobs {
            if b.ID.Equal(rid) {
                Printf("Blob belongs to pack %s\n ... Pack %s: %s\n", b.PackID, b.PackID.Str(), b.String())
                break
            }
        }
    for i := range f.treeIDs {
        f.findObjectPack(ctx, i, restic.TreeBlob)
    }
}

@@ -557,8 +574,8 @@ func runFind(opts FindOptions, gopts GlobalOptions, args []string) error {
    }
    f.out.Finish()

    if opts.ShowPackID && f.blobIDs != nil {
        f.findBlobsPacks(ctx)
    if opts.ShowPackID && (f.blobIDs != nil || f.treeIDs != nil) {
        f.findObjectsPacks(ctx)
    }

    return nil

@@ -3,10 +3,8 @@ package main
import (
    "context"
    "encoding/json"
    "sort"
    "strings"
    "io"

    "github.com/restic/restic/internal/errors"
    "github.com/restic/restic/internal/restic"
    "github.com/spf13/cobra"
)

@@ -90,153 +88,129 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {
        return err
    }

    // group by hostname and dirs
    type key struct {
        Hostname string
        Paths    []string
        Tags     []string
    }
    snapshotGroups := make(map[string]restic.Snapshots)

    var GroupByTag bool
    var GroupByHost bool
    var GroupByPath bool
    var GroupOptionList []string

    GroupOptionList = strings.Split(opts.GroupBy, ",")

    for _, option := range GroupOptionList {
        switch option {
        case "host":
            GroupByHost = true
        case "paths":
            GroupByPath = true
        case "tags":
            GroupByTag = true
        case "":
        default:
            return errors.Fatal("unknown grouping option: '" + option + "'")
        }
    }

    removeSnapshots := 0

    ctx, cancel := context.WithCancel(gopts.ctx)
    defer cancel()

    var snapshots restic.Snapshots

    for sn := range FindFilteredSnapshots(ctx, repo, opts.Host, opts.Tags, opts.Paths, args) {
        if len(args) > 0 {
            // When explicit snapshots args are given, remove them immediately.
            snapshots = append(snapshots, sn)
        }

    if len(args) > 0 {
        // When explicit snapshots args are given, remove them immediately.
        for _, sn := range snapshots {
            if !opts.DryRun {
                h := restic.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
                if err = repo.Backend().Remove(gopts.ctx, h); err != nil {
                    return err
                }
                Verbosef("removed snapshot %v\n", sn.ID().Str())
                if !gopts.JSON {
                    Verbosef("removed snapshot %v\n", sn.ID().Str())
                }
                removeSnapshots++
            } else {
                Verbosef("would have removed snapshot %v\n", sn.ID().Str())
                if !gopts.JSON {
                    Verbosef("would have removed snapshot %v\n", sn.ID().Str())
                }
            }
        } else {
            // Determining grouping-keys
            var tags []string
            var hostname string
            var paths []string

            if GroupByTag {
                tags = sn.Tags
                sort.StringSlice(tags).Sort()
            }
            if GroupByHost {
                hostname = sn.Hostname
            }
            if GroupByPath {
                paths = sn.Paths
            }

            sort.StringSlice(sn.Paths).Sort()
            var k []byte
            var err error

            k, err = json.Marshal(key{Tags: tags, Hostname: hostname, Paths: paths})

            if err != nil {
                return err
            }
            snapshotGroups[string(k)] = append(snapshotGroups[string(k)], sn)
        }
    }
    } else {
        snapshotGroups, _, err := restic.GroupSnapshots(snapshots, opts.GroupBy)
        if err != nil {
            return err
        }

    policy := restic.ExpirePolicy{
        Last:    opts.Last,
        Hourly:  opts.Hourly,
        Daily:   opts.Daily,
        Weekly:  opts.Weekly,
        Monthly: opts.Monthly,
        Yearly:  opts.Yearly,
        Within:  opts.Within,
        Tags:    opts.KeepTags,
    }
        policy := restic.ExpirePolicy{
            Last:    opts.Last,
            Hourly:  opts.Hourly,
            Daily:   opts.Daily,
            Weekly:  opts.Weekly,
            Monthly: opts.Monthly,
            Yearly:  opts.Yearly,
            Within:  opts.Within,
            Tags:    opts.KeepTags,
        }

    if policy.Empty() && len(args) == 0 {
        Verbosef("no policy was specified, no snapshots will be removed\n")
    }
        if policy.Empty() && len(args) == 0 {
            if !gopts.JSON {
                Verbosef("no policy was specified, no snapshots will be removed\n")
            }
        }

    if !policy.Empty() {
        Verbosef("Applying Policy: %v\n", policy)

        for k, snapshotGroup := range snapshotGroups {
            var key key
            if json.Unmarshal([]byte(k), &key) != nil {
                return err
        if !policy.Empty() {
            if !gopts.JSON {
                Verbosef("Applying Policy: %v\n", policy)
            }

            // Info
            Verbosef("snapshots")
            var infoStrings []string
            if GroupByTag {
                infoStrings = append(infoStrings, "tags ["+strings.Join(key.Tags, ", ")+"]")
            }
            if GroupByHost {
                infoStrings = append(infoStrings, "host ["+key.Hostname+"]")
            }
            if GroupByPath {
                infoStrings = append(infoStrings, "paths ["+strings.Join(key.Paths, ", ")+"]")
            }
            if infoStrings != nil {
                Verbosef(" for (" + strings.Join(infoStrings, ", ") + ")")
            }
            Verbosef(":\n\n")
            var jsonGroups []*ForgetGroup

            keep, remove, reasons := restic.ApplyPolicy(snapshotGroup, policy)

            if len(keep) != 0 && !gopts.Quiet {
                Printf("keep %d snapshots:\n", len(keep))
                PrintSnapshots(globalOptions.stdout, keep, reasons, opts.Compact)
                Printf("\n")
            }

            if len(remove) != 0 && !gopts.Quiet {
                Printf("remove %d snapshots:\n", len(remove))
                PrintSnapshots(globalOptions.stdout, remove, nil, opts.Compact)
                Printf("\n")
            }

            removeSnapshots += len(remove)

            if !opts.DryRun {
                for _, sn := range remove {
                    h := restic.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
                    err = repo.Backend().Remove(gopts.ctx, h)
            for k, snapshotGroup := range snapshotGroups {
                if gopts.Verbose >= 1 && !gopts.JSON {
                    err = PrintSnapshotGroupHeader(gopts.stdout, k)
                    if err != nil {
                        return err
                    }
                }

                var key restic.SnapshotGroupKey
                if json.Unmarshal([]byte(k), &key) != nil {
                    return err
                }

                var fg ForgetGroup
                fg.Tags = key.Tags
                fg.Host = key.Hostname
                fg.Paths = key.Paths

                keep, remove, reasons := restic.ApplyPolicy(snapshotGroup, policy)

                if len(keep) != 0 && !gopts.Quiet && !gopts.JSON {
                    Printf("keep %d snapshots:\n", len(keep))
                    PrintSnapshots(globalOptions.stdout, keep, reasons, opts.Compact)
                    Printf("\n")
                }
                addJSONSnapshots(&fg.Keep, keep)

                if len(remove) != 0 && !gopts.Quiet && !gopts.JSON {
                    Printf("remove %d snapshots:\n", len(remove))
                    PrintSnapshots(globalOptions.stdout, remove, nil, opts.Compact)
                    Printf("\n")
                }
                addJSONSnapshots(&fg.Remove, remove)

                fg.Reasons = reasons

                jsonGroups = append(jsonGroups, &fg)

                removeSnapshots += len(remove)

                if !opts.DryRun {
                    for _, sn := range remove {
                        h := restic.Handle{Type: restic.SnapshotFile, Name: sn.ID().String()}
                        err = repo.Backend().Remove(gopts.ctx, h)
                        if err != nil {
                            return err
                        }
                    }
                }
            }

            if gopts.JSON {
                err = printJSONForget(gopts.stdout, jsonGroups)
                if err != nil {
                    return err
                }
            }
        }
    }

    if removeSnapshots > 0 && opts.Prune {
        Verbosef("%d snapshots have been removed, running prune\n", removeSnapshots)
        if !gopts.JSON {
            Verbosef("%d snapshots have been removed, running prune\n", removeSnapshots)
        }
        if !opts.DryRun {
            return pruneRepository(gopts, repo)
        }

@@ -244,3 +218,28 @@ func runForget(opts ForgetOptions, gopts GlobalOptions, args []string) error {

    return nil
}

// ForgetGroup helps to print what is forgotten in JSON.
type ForgetGroup struct {
    Tags    []string            `json:"tags"`
    Host    string              `json:"host"`
    Paths   []string            `json:"paths"`
    Keep    []Snapshot          `json:"keep"`
    Remove  []Snapshot          `json:"remove"`
    Reasons []restic.KeepReason `json:"reasons"`
}

func addJSONSnapshots(js *[]Snapshot, list restic.Snapshots) {
    for _, sn := range list {
        k := Snapshot{
            Snapshot: sn,
            ID:       sn.ID(),
            ShortID:  sn.ID().Str(),
        }
        *js = append(*js, k)
    }
}

func printJSONForget(stdout io.Writer, forgets []*ForgetGroup) error {
    return json.NewEncoder(stdout).Encode(forgets)
}
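
The encoder writes a single JSON array of groups. A sketch of reading it
back; the struct mirrors the json tags above, while the id/short_id fields
are an assumption about the embedded Snapshot type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type snapshot struct {
        ID      string `json:"id"`
        ShortID string `json:"short_id"`
    }

    type forgetGroup struct {
        Tags   []string   `json:"tags"`
        Host   string     `json:"host"`
        Paths  []string   `json:"paths"`
        Keep   []snapshot `json:"keep"`
        Remove []snapshot `json:"remove"`
    }

    func main() {
        // e.g. restic forget --json --dry-run --keep-last 1 | thisprogram
        var groups []forgetGroup
        if err := json.NewDecoder(os.Stdin).Decode(&groups); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        for _, g := range groups {
            fmt.Printf("host %s: keep %d, remove %d\n", g.Host, len(g.Keep), len(g.Remove))
        }
    }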

@@ -12,7 +12,7 @@ var cmdGenerate = &cobra.Command{
    Use:   "generate [command]",
    Short: "Generate manual pages and auto-completion files (bash, zsh)",
    Long: `
The "generate" command writes automatically generated files like the man pages
The "generate" command writes automatically generated files (like the man pages
and the auto-completion files for bash and zsh).
`,
    DisableAutoGenTag: true,

@@ -149,7 +149,7 @@ func mount(opts MountOptions, gopts GlobalOptions, mountpoint string) error {
    }

    Printf("Now serving the repository at %s\n", mountpoint)
    Printf("Don't forget to umount after quitting!\n")
    Printf("When finished, quit with Ctrl-c or umount the mountpoint.\n")

    debug.Log("serving mount at %v", mountpoint)
    err = fs.Serve(c, root)
@@ -6,6 +6,7 @@ import (
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/restorer"
"strings"

"github.com/spf13/cobra"
)
@@ -28,13 +29,15 @@ repository.

// RestoreOptions collects all options for the restore command.
type RestoreOptions struct {
Exclude []string
Include []string
Target string
Host string
Paths []string
Tags restic.TagLists
Verify bool
Exclude []string
InsensitiveExclude []string
Include []string
InsensitiveInclude []string
Target string
Host string
Paths []string
Tags restic.TagLists
Verify bool
}

var restoreOptions RestoreOptions
@@ -44,7 +47,9 @@ func init() {

flags := cmdRestore.Flags()
flags.StringArrayVarP(&restoreOptions.Exclude, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
flags.StringArrayVar(&restoreOptions.InsensitiveExclude, "iexclude", nil, "same as `--exclude` but ignores the casing of filenames")
flags.StringArrayVarP(&restoreOptions.Include, "include", "i", nil, "include a `pattern`, exclude everything else (can be specified multiple times)")
flags.StringArrayVar(&restoreOptions.InsensitiveInclude, "iinclude", nil, "same as `--include` but ignores the casing of filenames")
flags.StringVarP(&restoreOptions.Target, "target", "t", "", "directory to extract data to")

flags.StringVarP(&restoreOptions.Host, "host", "H", "", `only consider snapshots for this host when the snapshot ID is "latest"`)
@@ -55,6 +60,16 @@ func init() {

func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
ctx := gopts.ctx
hasExcludes := len(opts.Exclude) > 0 || len(opts.InsensitiveExclude) > 0
hasIncludes := len(opts.Include) > 0 || len(opts.InsensitiveInclude) > 0

for i, str := range opts.InsensitiveExclude {
opts.InsensitiveExclude[i] = strings.ToLower(str)
}

for i, str := range opts.InsensitiveInclude {
opts.InsensitiveInclude[i] = strings.ToLower(str)
}

switch {
case len(args) == 0:
@@ -67,7 +82,7 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
return errors.Fatal("please specify a directory to restore to (--target)")
}

if len(opts.Exclude) > 0 && len(opts.Include) > 0 {
if hasExcludes && hasIncludes {
return errors.Fatal("exclude and include patterns are mutually exclusive")
}

@@ -125,11 +140,16 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
Warnf("error for exclude pattern: %v", err)
}

matchedInsensitive, _, err := filter.List(opts.InsensitiveExclude, strings.ToLower(item))
if err != nil {
Warnf("error for iexclude pattern: %v", err)
}

// An exclude filter is basically a 'wildcard but foo',
// so even if a childMayMatch, other children of a dir may not,
// therefore childMayMatch does not matter, but we should not go down
// unless the dir is selected for restore
selectedForRestore = !matched
selectedForRestore = !matched && !matchedInsensitive
childMayBeSelected = selectedForRestore && node.Type == "dir"

return selectedForRestore, childMayBeSelected
@@ -141,15 +161,20 @@ func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
Warnf("error for include pattern: %v", err)
}

selectedForRestore = matched
childMayBeSelected = childMayMatch && node.Type == "dir"
matchedInsensitive, childMayMatchInsensitive, err := filter.List(opts.InsensitiveInclude, strings.ToLower(item))
if err != nil {
Warnf("error for iinclude pattern: %v", err)
}

selectedForRestore = matched || matchedInsensitive
childMayBeSelected = (childMayMatch || childMayMatchInsensitive) && node.Type == "dir"

return selectedForRestore, childMayBeSelected
}

if len(opts.Exclude) > 0 {
if hasExcludes {
res.SelectFilter = selectExcludeFilter
} else if len(opts.Include) > 0 {
} else if hasIncludes {
res.SelectFilter = selectIncludeFilter
}
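The pattern here, lowercasing the ``--iexclude``/``--iinclude`` patterns once at startup and lowercasing each item at match time, can be sketched without restic's internal filter package. This stand-in matches only base names via path.Match, so it is far simpler than filter.List; the helper name is made up for illustration:

package main

import (
	"fmt"
	"path"
	"strings"
)

// matchInsensitive reports whether any pattern matches the base name of
// item, ignoring case. Patterns are assumed to be pre-lowercased, mirroring
// how runRestore lowercases opts.InsensitiveInclude once up front.
func matchInsensitive(patterns []string, item string) bool {
	name := strings.ToLower(path.Base(item))
	for _, p := range patterns {
		if ok, err := path.Match(p, name); err == nil && ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"*.go", "readme.md"} // already lowercased
	fmt.Println(matchInsensitive(patterns, "/home/user/FOO.GO")) // true
	fmt.Println(matchInsensitive(patterns, "/home/user/README")) // false
}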
@@ -32,6 +32,7 @@ type SnapshotOptions struct {
Paths []string
Compact bool
Last bool
GroupBy string
}

var snapshotOptions SnapshotOptions
@@ -45,6 +46,7 @@ func init() {
f.StringArrayVar(&snapshotOptions.Paths, "path", nil, "only consider snapshots for this `path` (can be specified multiple times)")
f.BoolVarP(&snapshotOptions.Compact, "compact", "c", false, "use compact format")
f.BoolVar(&snapshotOptions.Last, "last", false, "only show the last snapshot for each host and path")
f.StringVarP(&snapshotOptions.GroupBy, "group-by", "g", "", "string for grouping snapshots by host,paths,tags")
}

func runSnapshots(opts SnapshotOptions, gopts GlobalOptions, args []string) error {
@@ -64,25 +66,41 @@ func runSnapshots(opts SnapshotOptions, gopts GlobalOptions, args []string) erro
ctx, cancel := context.WithCancel(gopts.ctx)
defer cancel()

var list restic.Snapshots
var snapshots restic.Snapshots
for sn := range FindFilteredSnapshots(ctx, repo, opts.Host, opts.Tags, opts.Paths, args) {
list = append(list, sn)
snapshots = append(snapshots, sn)
}
snapshotGroups, grouped, err := restic.GroupSnapshots(snapshots, opts.GroupBy)
if err != nil {
return err
}

if opts.Last {
list = FilterLastSnapshots(list)
for k, list := range snapshotGroups {
if opts.Last {
list = FilterLastSnapshots(list)
}
sort.Sort(sort.Reverse(list))
snapshotGroups[k] = list
}

sort.Sort(sort.Reverse(list))

if gopts.JSON {
err := printSnapshotsJSON(gopts.stdout, list)
err := printSnapshotGroupJSON(gopts.stdout, snapshotGroups, grouped)
if err != nil {
Warnf("error printing snapshot: %v\n", err)
Warnf("error printing snapshots: %v\n", err)
}
return nil
}
PrintSnapshots(gopts.stdout, list, nil, opts.Compact)

for k, list := range snapshotGroups {
if grouped {
err := PrintSnapshotGroupHeader(gopts.stdout, k)
if err != nil {
Warnf("error printing snapshots: %v\n", err)
return nil
}
}
PrintSnapshots(gopts.stdout, list, nil, opts.Compact)
}

return nil
}
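Worth noting: the map keys used for grouping are JSON-encoded restic.SnapshotGroupKey values, which lets a struct act as a map key and be recovered later with json.Unmarshal. A minimal, self-contained sketch of that round trip (the groupKey type is a cut-down stand-in, not restic's):

package main

import (
	"encoding/json"
	"fmt"
)

// groupKey is a cut-down stand-in for restic.SnapshotGroupKey.
type groupKey struct {
	Hostname string   `json:"hostname"`
	Paths    []string `json:"paths"`
}

func main() {
	groups := map[string][]string{}

	// Group some snapshot IDs by a JSON-encoded key. Marshal output is
	// deterministic for identical field values, so equal keys collide.
	for _, s := range []struct {
		id  string
		key groupKey
	}{
		{"40dc1520", groupKey{Hostname: "kasimir", Paths: []string{"/home/user/work"}}},
		{"bdbd3439", groupKey{Hostname: "luigi", Paths: []string{"/home/art"}}},
		{"79766175", groupKey{Hostname: "kasimir", Paths: []string{"/home/user/work"}}},
	} {
		k, _ := json.Marshal(s.key)
		groups[string(k)] = append(groups[string(k)], s.id)
	}

	// Decode each key back into a struct when printing, as runSnapshots does.
	for k, ids := range groups {
		var key groupKey
		if err := json.Unmarshal([]byte(k), &key); err != nil {
			panic(err)
		}
		fmt.Printf("host [%s]: %v\n", key.Hostname, ids)
	}
}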
@@ -223,6 +241,42 @@ func PrintSnapshots(stdout io.Writer, list restic.Snapshots, reasons []restic.Ke
tab.Write(stdout)
}

// PrintSnapshotGroupHeader prints which group of the group-by option the
// following snapshots belong to.
// Prints nothing, if we did not group at all.
func PrintSnapshotGroupHeader(stdout io.Writer, groupKeyJSON string) error {
var key restic.SnapshotGroupKey
var err error

err = json.Unmarshal([]byte(groupKeyJSON), &key)
if err != nil {
return err
}

if key.Hostname == "" && key.Tags == nil && key.Paths == nil {
return nil
}

// Info
fmt.Fprintf(stdout, "snapshots")
var infoStrings []string
if key.Hostname != "" {
infoStrings = append(infoStrings, "host ["+key.Hostname+"]")
}
if key.Tags != nil {
infoStrings = append(infoStrings, "tags ["+strings.Join(key.Tags, ", ")+"]")
}
if key.Paths != nil {
infoStrings = append(infoStrings, "paths ["+strings.Join(key.Paths, ", ")+"]")
}
if infoStrings != nil {
fmt.Fprintf(stdout, " for (%s)", strings.Join(infoStrings, ", "))
}
fmt.Fprintf(stdout, ":\n")

return nil
}

// Snapshot helps to print Snapshots as JSON with their ID included.
type Snapshot struct {
*restic.Snapshot
@@ -231,19 +285,58 @@ type Snapshot struct {
ShortID string `json:"short_id"`
}

// printSnapshotsJSON writes the JSON representation of list to stdout.
func printSnapshotsJSON(stdout io.Writer, list restic.Snapshots) error {
// SnapshotGroup helps to print SnapshotGroups as JSON with their GroupReasons included.
type SnapshotGroup struct {
GroupKey restic.SnapshotGroupKey `json:"group_key"`
Snapshots []Snapshot `json:"snapshots"`
}

// printSnapshotsJSON writes the JSON representation of list to stdout.
func printSnapshotGroupJSON(stdout io.Writer, snGroups map[string]restic.Snapshots, grouped bool) error {
if grouped {
var snapshotGroups []SnapshotGroup

for k, list := range snGroups {
var key restic.SnapshotGroupKey
var err error
var snapshots []Snapshot

err = json.Unmarshal([]byte(k), &key)
if err != nil {
return err
}

for _, sn := range list {
k := Snapshot{
Snapshot: sn,
ID: sn.ID(),
ShortID: sn.ID().Str(),
}
snapshots = append(snapshots, k)
}

group := SnapshotGroup{
GroupKey: key,
Snapshots: snapshots,
}
snapshotGroups = append(snapshotGroups, group)
}

return json.NewEncoder(stdout).Encode(snapshotGroups)
}

// Old behavior
var snapshots []Snapshot

for _, sn := range list {

k := Snapshot{
Snapshot: sn,
ID: sn.ID(),
ShortID: sn.ID().Str(),
for _, list := range snGroups {
for _, sn := range list {
k := Snapshot{
Snapshot: sn,
ID: sn.ID(),
ShortID: sn.ID().Str(),
}
snapshots = append(snapshots, k)
}
snapshots = append(snapshots, k)
}

return json.NewEncoder(stdout).Encode(snapshots)
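Downstream consumers have to handle both output shapes: with ``--group-by`` the JSON is an array of group objects, without it a flat snapshot array. A hedged sketch in Go that accepts either; the struct fields are trimmed to what the sketch needs, and the ``hostname``/``paths``/``tags`` key names are inferred from the SnapshotGroupKey usage above, not verified against every restic version:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// snapshotGroup mirrors the "group_key"/"snapshots" envelope that
// printSnapshotGroupJSON emits when --group-by is set.
type snapshotGroup struct {
	GroupKey struct {
		Hostname string   `json:"hostname"`
		Paths    []string `json:"paths"`
		Tags     []string `json:"tags"`
	} `json:"group_key"`
	Snapshots []struct {
		ShortID string `json:"short_id"`
	} `json:"snapshots"`
}

func main() {
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Grouped output: an array of groups, each carrying its snapshots.
	var groups []snapshotGroup
	if json.Unmarshal(raw, &groups) == nil && len(groups) > 0 && groups[0].Snapshots != nil {
		for _, g := range groups {
			fmt.Printf("host [%s]: %d snapshots\n", g.GroupKey.Hostname, len(g.Snapshots))
		}
		return
	}

	// Ungrouped output (the "old behavior" branch): a flat snapshot array.
	var flat []struct {
		ShortID string `json:"short_id"`
	}
	if err := json.Unmarshal(raw, &flat); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%d snapshots\n", len(flat))
}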
@@ -36,7 +36,8 @@ The modes are:
* raw-data: Counts the size of blobs in the repository, regardless of
how many files reference them.
* blobs-per-file: A combination of files-by-contents and raw-data.
* Refer to the online manual for more details about each mode.

Refer to the online manual for more details about each mode.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
@@ -88,6 +88,18 @@ func rejectByPattern(patterns []string) RejectByNameFunc {
}
}

// Same as `rejectByPattern` but case insensitive.
func rejectByInsensitivePattern(patterns []string) RejectByNameFunc {
for index, path := range patterns {
patterns[index] = strings.ToLower(path)
}

rejFunc := rejectByPattern(patterns)
return func(item string) bool {
return rejFunc(strings.ToLower(item))
}
}

// rejectIfPresent returns a RejectByNameFunc which itself returns whether a path
// should be excluded. The RejectByNameFunc considers a file to be excluded when
// it resides in a directory with an exclusion file, that is specified by
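rejectByInsensitivePattern reuses rejectByPattern by normalizing case on both sides, a small decorator. The same shape in a self-contained form; the type and function names below are re-declared for the sketch and are not restic's:

package main

import (
	"fmt"
	"path"
	"strings"
)

// rejectByNameFunc reports whether an item should be excluded.
type rejectByNameFunc func(item string) bool

// insensitive wraps any matcher so it compares case-insensitively,
// mirroring how rejectByInsensitivePattern wraps rejectByPattern above.
func insensitive(f rejectByNameFunc) rejectByNameFunc {
	return func(item string) bool {
		return f(strings.ToLower(item))
	}
}

func main() {
	// The inner matcher only ever sees lowercased input, so its
	// patterns must be lowercased too.
	byPattern := func(item string) bool {
		ok, _ := path.Match("*.go", path.Base(item))
		return ok
	}
	reject := insensitive(byPattern)
	fmt.Println(reject("/home/user/FOO.GO")) // true
	fmt.Println(reject("/home/user/foo.c"))  // false
}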
@@ -36,6 +36,33 @@ func TestRejectByPattern(t *testing.T) {
}
}

func TestRejectByInsensitivePattern(t *testing.T) {
var tests = []struct {
filename string
reject bool
}{
{filename: "/home/user/foo.GO", reject: true},
{filename: "/home/user/foo.c", reject: false},
{filename: "/home/user/foobar", reject: false},
{filename: "/home/user/FOObar/x", reject: true},
{filename: "/home/user/README", reject: false},
{filename: "/home/user/readme.md", reject: true},
}

patterns := []string{"*.go", "README.md", "/home/user/foobar/*"}

for _, tc := range tests {
t.Run("", func(t *testing.T) {
reject := rejectByInsensitivePattern(patterns)
res := reject(tc.filename)
if res != tc.reject {
t.Fatalf("wrong result for filename %v: want %v, got %v",
tc.filename, tc.reject, res)
}
})
}
}

func TestIsExcludedByFile(t *testing.T) {
const (
tagFilename = "CACHEDIR.TAG"
@@ -1,6 +1,7 @@
package main

import (
"bufio"
"context"
"fmt"
"io"
@@ -37,7 +38,7 @@ import (
"os/exec"
)

var version = "0.9.4"
var version = "0.9.5"

// TimeFormat is the format used for all timestamps printed by restic.
const TimeFormat = "2006-01-02 15:04:05"
@@ -273,15 +274,10 @@ func resolvePassword(opts GlobalOptions) (string, error) {

// readPassword reads the password from the given reader directly.
func readPassword(in io.Reader) (password string, err error) {
buf := make([]byte, 1000)
n, err := io.ReadFull(in, buf)
buf = buf[:n]
sc := bufio.NewScanner(in)
sc.Scan()

if err != nil && errors.Cause(err) != io.ErrUnexpectedEOF {
return "", errors.Wrap(err, "ReadFull")
}

return strings.TrimRight(string(buf), "\r\n"), nil
return sc.Text(), errors.Wrap(err, "Scan")
}

// readPasswordTerminal reads the password from the given reader which must be a
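The switch from io.ReadFull to bufio.Scanner reads exactly one line and stops at the newline instead of draining up to 1000 bytes of stdin. A minimal sketch of that pattern; note that Scanner reports read failures via sc.Err() after Scan returns, and Err is nil on plain EOF:

package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// readLine reads a single line from r, without the trailing newline.
func readLine(r io.Reader) (string, error) {
	sc := bufio.NewScanner(r)
	sc.Scan()                  // reads up to the first '\n' (or EOF)
	return sc.Text(), sc.Err() // nil on EOF, non-nil on real errors
}

func main() {
	pw, err := readLine(strings.NewReader("s3cr3t\nrest of stdin\n"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("password: %q\n", pw) // password: "s3cr3t"
}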
@@ -336,13 +332,15 @@ func ReadPasswordTwice(gopts GlobalOptions, prompt1, prompt2 string) (string, er
if err != nil {
return "", err
}
pw2, err := ReadPassword(gopts, prompt2)
if err != nil {
return "", err
}
if stdinIsTerminal() {
pw2, err := ReadPassword(gopts, prompt2)
if err != nil {
return "", err
}

if pw1 != pw2 {
return "", errors.Fatal("passwords do not match")
if pw1 != pw2 {
return "", errors.Fatal("passwords do not match")
}
}

return pw1, nil
@@ -377,7 +375,7 @@ func OpenRepository(opts GlobalOptions) (*repository.Repository, error) {
return nil, err
}

if stdoutIsTerminal() {
if stdoutIsTerminal() && !opts.JSON {
id := s.Config().ID
if len(id) > 8 {
id = id[:8]
@@ -219,6 +219,35 @@ func testRunForget(t testing.TB, gopts GlobalOptions, args ...string) {
rtest.OK(t, runForget(opts, gopts, args))
}

func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
buf := bytes.NewBuffer(nil)
oldJSON := gopts.JSON
gopts.stdout = buf
gopts.JSON = true
defer func() {
gopts.stdout = os.Stdout
gopts.JSON = oldJSON
}()

opts := ForgetOptions{
DryRun: true,
Last: 1,
}

rtest.OK(t, runForget(opts, gopts, args))

var forgets []*ForgetGroup
rtest.OK(t, json.Unmarshal(buf.Bytes(), &forgets))

rtest.Assert(t, len(forgets) == 1,
"Expected 1 snapshot group, got %v", len(forgets))
rtest.Assert(t, len(forgets[0].Keep) == 1,
"Expected 1 snapshot to be kept, got %v", len(forgets[0].Keep))
rtest.Assert(t, len(forgets[0].Remove) == 2,
"Expected 2 snapshots to be removed, got %v", len(forgets[0].Remove))
return
}

func testRunPrune(t testing.TB, gopts GlobalOptions) {
rtest.OK(t, runPrune(gopts))
}
@@ -1051,6 +1080,7 @@ func TestPrune(t *testing.T) {
rtest.Assert(t, len(snapshotIDs) == 3,
"expected 3 snapshot, got %v", snapshotIDs)

testRunForgetJSON(t, env.gopts)
testRunForget(t, env.gopts, firstSnapshot[0].String())
testRunPrune(t, env.gopts)
testRunCheck(t, env.gopts)
@@ -78,6 +78,12 @@ If you are using macOS, you can install restic using the

$ brew install restic

You may also install it using `MacPorts <https://www.macports.org/>`__:

.. code-block:: console

$ sudo port install restic

Nix & NixOS
===========

@@ -168,28 +174,28 @@ There's both pre-compiled binaries for different platforms as well as the source
code available for download. Just download and run the one matching your system.

The official binaries can be updated in place using the ``restic self-update``
command:
command (needs restic 0.9.3 or later):

.. code-block:: console

$ restic version
restic 0.9.1 compiled with go1.10.3 on linux/amd64
restic 0.9.3 compiled with go1.11.2 on linux/amd64

$ restic self-update
find latest release of restic at GitHub
latest version is 0.9.2
latest version is 0.9.4
download file SHA256SUMS
download SHA256SUMS
download file SHA256SUMS
download SHA256SUMS.asc
GPG signature verification succeeded
download restic_0.9.2_linux_amd64.bz2
downloaded restic_0.9.2_linux_amd64.bz2
download restic_0.9.4_linux_amd64.bz2
downloaded restic_0.9.4_linux_amd64.bz2
saved 12115904 bytes in ./restic
successfully updated restic to version 0.9.2
successfully updated restic to version 0.9.4

$ restic version
restic 0.9.2 compiled with go1.10.3 on linux/amd64
restic 0.9.4 compiled with go1.12.1 on linux/amd64

The ``self-update`` command uses the GPG signature on the files uploaded to
GitHub to verify their authenticity. No external programs are necessary.
@@ -122,7 +122,17 @@ Last, if you'd like to use an entirely different program to create the
SFTP connection, you can specify the command to be run with the option
``-o sftp.command="foobar"``.

.. note:: Please be aware that sftp servers close connections when no data is
received by the client. This can happen when restic is processing huge
amounts of unchanged data. To avoid this issue add the following lines
to the client's .ssh/config file:

::

ServerAliveInterval 60
ServerAliveCountMax 240

REST Server
***********

@@ -268,6 +278,18 @@ the naming convention of those variables follows the official Python Swift clien
$ export OS_PROJECT_NAME=<MY_PROJECT_NAME>
$ export OS_PROJECT_DOMAIN_NAME=<MY_PROJECT_DOMAIN_NAME>

# For keystone v3 application credential authentication (application credential id)
$ export OS_AUTH_URL=<MY_AUTH_URL>
$ export OS_APPLICATION_CREDENTIAL_ID=<MY_APPLICATION_CREDENTIAL_ID>
$ export OS_APPLICATION_CREDENTIAL_SECRET=<MY_APPLICATION_CREDENTIAL_SECRET>

# For keystone v3 application credential authentication (application credential name)
$ export OS_AUTH_URL=<MY_AUTH_URL>
$ export OS_USERNAME=<MY_USERNAME>
$ export OS_USER_DOMAIN_NAME=<MY_DOMAIN_NAME>
$ export OS_APPLICATION_CREDENTIAL_NAME=<MY_APPLICATION_CREDENTIAL_NAME>
$ export OS_APPLICATION_CREDENTIAL_SECRET=<MY_APPLICATION_CREDENTIAL_SECRET>

# For authentication based on tokens
$ export OS_STORAGE_URL=<MY_STORAGE_URL>
$ export OS_AUTH_TOKEN=<MY_AUTH_TOKEN>
@@ -302,7 +324,7 @@ Backblaze B2

Restic can backup data to any Backblaze B2 bucket. You need to first setup the
following environment variables with the credentials you can find in the
dashboard in on the "Buckets" page when signed into your B2 account:
dashboard on the "Buckets" page when signed into your B2 account:

.. code-block:: console

@@ -520,7 +542,7 @@ interaction. If you use emulation environments like
``Mintty`` or ``rxvt``, you may get a password error.

You can workaround this by using a special tool called ``winpty`` (look
`here <https://sourceforge.net/p/msys2/wiki/Porting/>`__ and
`here <https://github.com/msys2/msys2/wiki/Porting>`__ and
`here <https://github.com/rprichard/winpty>`__ for detail information).
On MSYS2, you can install ``winpty`` as follows:
@@ -139,10 +139,10 @@ You can exclude folders and files by specifying exclude patterns, currently
the exclude options are:

- ``--exclude`` Specified one or more times to exclude one or more items
- ``--iexclude`` Same as ``--exclude`` but ignores the case of paths
- ``--exclude-caches`` Specified once to exclude folders containing a special file
- ``--exclude-file`` Specified one or more times to exclude items listed in a given file
- ``--exclude-if-present`` Specified one or more times to exclude a folders content
if it contains a given file (optionally having a given header)
- ``--exclude-if-present foo`` Specified one or more times to exclude a folder's content if it contains a file called ``foo`` (optionally having a given header, no wildcards for the file name supported)

Let's say we have a file called ``excludes.txt`` with the following content:

@@ -279,6 +279,10 @@ written, and the next backup needs to write new metadata again. If you really
want to save the access time for files and directories, you can pass the
``--with-atime`` option to the ``backup`` command.

On filesystems that do not support inode consistency, like FUSE-based ones and
pCloud, it is possible to ignore the inode in the comparison for changed files
by passing ``--ignore-inode`` to the ``backup`` command.

Reading data from stdin
***********************

@@ -289,6 +293,7 @@ this mode of operation, just supply the option ``--stdin`` to the

.. code-block:: console

$ set -o pipefail
$ mysqldump [...] | restic -r /srv/restic-repo backup --stdin

This creates a new snapshot of the output of ``mysqldump``. You can then
@@ -302,6 +307,13 @@ specified with ``--stdin-filename``, e.g. like this:

$ mysqldump [...] | restic -r /srv/restic-repo backup --stdin --stdin-filename production.sql

The option ``pipefail`` is highly recommended so that a non-zero exit code from
one of the programs in the pipe (e.g. ``mysqldump`` here) makes the whole chain
return a non-zero exit code. Refer to the `Use the Unofficial Bash Strict Mode
<http://redsymbol.net/articles/unofficial-bash-strict-mode/>`__ for more
details on this.

Tags for backup
***************

@@ -360,7 +372,11 @@ environment variables. The following list of environment variables:

OS_USER_DOMAIN_NAME User domain name for keystone authentication
OS_PROJECT_NAME Project name for keystone authentication
OS_PROJECT_DOMAIN_NAME PRoject domain name for keystone authentication
OS_PROJECT_DOMAIN_NAME Project domain name for keystone authentication

OS_APPLICATION_CREDENTIAL_ID Application Credential ID (keystone v3)
OS_APPLICATION_CREDENTIAL_NAME Application Credential Name (keystone v3)
OS_APPLICATION_CREDENTIAL_SECRET Application Credential Secret (keystone v3)

OS_STORAGE_URL Storage URL for token authentication
OS_AUTH_TOKEN Auth token for token authentication
@@ -56,6 +56,31 @@ Or filter by host:

Combining filters is also possible.

Furthermore you can group the output by the same filters (host, paths, tags):

.. code-block:: console

$ restic -r /srv/restic-repo snapshots --group-by host

enter password for repository:
snapshots for (host [kasimir])
ID        Date                 Host     Tags   Directory
----------------------------------------------------------------------
40dc1520  2015-05-08 21:38:30  kasimir         /home/user/work
79766175  2015-05-08 21:40:19  kasimir         /home/user/work
2 snapshots
snapshots for (host [luigi])
ID        Date                 Host     Tags   Directory
----------------------------------------------------------------------
bdbd3439  2015-05-08 21:45:17  luigi           /home/art
9f0bc19e  2015-05-08 21:46:11  luigi           /srv
2 snapshots
snapshots for (host [kazik])
ID        Date                 Host     Tags   Directory
----------------------------------------------------------------------
590c8fc8  2015-05-08 21:47:38  kazik           /srv
1 snapshots

Checking a repo's integrity and consistency
===========================================
@@ -101,7 +126,7 @@ data files:

Use ``--read-data-subset=n/t`` parameter to check subset of repository data
files. The parameter takes two values, ``n`` and ``t``. All repository data
files are logically devided in ``t`` roughly equal groups and only files that
files are logically divided in ``t`` roughly equal groups and only files that
belong to the group number ``n`` are checked. For example, the following
commands check all repository data files over 5 separate invocations:
@@ -52,6 +52,10 @@ You can use the command ``restic ls latest`` or ``restic find foo`` to find the
path to the file within the snapshot. This path you can then pass verbatim to
`--include` to only restore the single file or directory.

There are case insensitive variants of ``--exclude`` and ``--include`` called
``--iexclude`` and ``--iinclude``. These options will behave the same way but
ignore the casing of paths.

Restore using mount
===================

@@ -65,7 +69,7 @@ command to serve the repository with FUSE:
$ restic -r /srv/restic-repo mount /mnt/restic
enter password for repository:
Now serving /srv/restic-repo at /mnt/restic
Don't forget to umount after quitting!
When finished, quit with Ctrl-c or umount the mountpoint.

Mounting repositories via FUSE is not possible on OpenBSD, Solaris/illumos
and Windows. For Linux, the ``fuse`` kernel module needs to be loaded. For
@@ -120,3 +124,13 @@ e.g.:
.. code-block:: console

$ restic -r /srv/restic-repo dump --path /production.sql latest production.sql | mysql

It is also possible to ``dump`` the contents of a whole folder structure to
stdout. To retain the information about the files and folders Restic will
output the contents in the tar format:

.. code-block:: console

$ restic -r /srv/restic-repo dump /home/other/work latest > restore.tar
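Since ``dump`` writes a well-formed tar stream, the output can also be consumed programmatically instead of saved to a file. A sketch in Go that lists the entries, assuming a ``restic`` binary on the PATH with the repository and password supplied via the RESTIC_REPOSITORY and RESTIC_PASSWORD environment variables; the argument order follows the documentation example above:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of: restic dump /home/other/work latest | <this program>
	cmd := exec.Command("restic", "dump", "/home/other/work", "latest")
	cmd.Stderr = os.Stderr // surface restic's own error messages
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Walk the tar stream entry by entry as it arrives.
	tr := tar.NewReader(out)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%d bytes)\n", hdr.Name, hdr.Size)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}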
@@ -210,7 +210,7 @@ all snapshots, use ``--keep-last 1`` and then finally remove the last
snapshot ID manually (by passing the ID to ``forget``).

All snapshots are evaluated against all matching ``--keep-*`` counts. A
single snapshot on 2017-09-30 (Sun) will count as a daily, weekly and monthly.
single snapshot on 2017-09-30 (Sat) will count as a daily, weekly and monthly.

Let's explain this with an example: Suppose you have only made a backup
on each Sunday for 12 weeks. Then ``forget --keep-daily 4`` will keep
@@ -277,6 +277,10 @@ _restic_backup()
flags+=("--host=")
two_word_flags+=("-H")
local_nonpersistent_flags+=("--host=")
flags+=("--iexclude=")
local_nonpersistent_flags+=("--iexclude=")
flags+=("--ignore-inode")
local_nonpersistent_flags+=("--ignore-inode")
flags+=("--one-file-system")
flags+=("-x")
local_nonpersistent_flags+=("--one-file-system")
@@ -1222,6 +1226,10 @@ _restic_restore()
flags+=("--host=")
two_word_flags+=("-H")
local_nonpersistent_flags+=("--host=")
flags+=("--iexclude=")
local_nonpersistent_flags+=("--iexclude=")
flags+=("--iinclude=")
local_nonpersistent_flags+=("--iinclude=")
flags+=("--include=")
two_word_flags+=("-i")
local_nonpersistent_flags+=("--include=")
@@ -1324,6 +1332,9 @@ _restic_snapshots()
flags+=("--compact")
flags+=("-c")
local_nonpersistent_flags+=("--compact")
flags+=("--group-by=")
two_word_flags+=("-g")
local_nonpersistent_flags+=("--group-by=")
flags+=("--help")
flags+=("-h")
local_nonpersistent_flags+=("--help")
@@ -26,7 +26,8 @@ given as the arguments.

.PP
\fB\-\-exclude\-caches\fP[=false]
excludes cache directories that are marked with a CACHEDIR.TAG file
excludes cache directories that are marked with a CACHEDIR.TAG file. See
\[la]http://bford.info/cachedir/spec.html\[ra] for the Cache Directory Tagging Standard

.PP
\fB\-\-exclude\-file\fP=[]
@@ -52,6 +53,14 @@ given as the arguments.
\fB\-H\fP, \fB\-\-host\fP=""
set the \fB\fChostname\fR for the snapshot manually. To prevent an expensive rescan use the "parent" flag

.PP
\fB\-\-iexclude\fP=[]
same as \fB\fC\-\-exclude\fR but ignores the casing of filenames

.PP
\fB\-\-ignore\-inode\fP[=false]
ignore inode number changes when checking for modified files

.PP
\fB\-x\fP, \fB\-\-one\-file\-system\fP[=false]
exclude other file systems

@@ -59,7 +59,7 @@ It can also be used to search for restic blobs or trees for troubleshooting.

.PP
\fB\-\-show\-pack\-id\fP[=false]
display the pack\-ID the blobs belong to (with \-\-blob)
display the pack\-ID the blobs belong to (with \-\-blob or \-\-tree)

.PP
\fB\-s\fP, \fB\-\-snapshot\fP=[]

@@ -15,7 +15,7 @@ restic\-generate \- Generate manual pages and auto\-completion files (bash, zsh)

.SH DESCRIPTION
.PP
The "generate" command writes automatically generated files like the man pages
The "generate" command writes automatically generated files (like the man pages
and the auto\-completion files for bash and zsh).

@@ -36,6 +36,14 @@ repository.
\fB\-H\fP, \fB\-\-host\fP=""
only consider snapshots for this host when the snapshot ID is "latest"

.PP
\fB\-\-iexclude\fP=[]
same as \fB\fC\-\-exclude\fR but ignores the casing of filenames

.PP
\fB\-\-iinclude\fP=[]
same as \fB\fC\-\-include\fR but ignores the casing of filenames

.PP
\fB\-i\fP, \fB\-\-include\fP=[]
include a \fB\fCpattern\fR, exclude everything else (can be specified multiple times)

@@ -23,6 +23,10 @@ The "snapshots" command lists all snapshots stored in the repository.
\fB\-c\fP, \fB\-\-compact\fP[=false]
use compact format

.PP
\fB\-g\fP, \fB\-\-group\-by\fP=""
string for grouping snapshots by host,paths,tags

.PP
\fB\-h\fP, \fB\-\-help\fP[=false]
help for snapshots

@@ -40,11 +40,12 @@ raw\-data: Counts the size of blobs in the repository, regardless of
how many files reference them.
.IP \(bu 2
blobs\-per\-file: A combination of files\-by\-contents and raw\-data.
.IP \(bu 2
Refer to the online manual for more details about each mode.

.RE

.PP
Refer to the online manual for more details about each mode.

.SH OPTIONS
.PP

@@ -78,7 +78,7 @@ command:

Flags:
-e, --exclude pattern exclude a pattern (can be specified multiple times)
--exclude-caches excludes cache directories that are marked with a CACHEDIR.TAG file
--exclude-caches excludes cache directories that are marked with a CACHEDIR.TAG file. See http://bford.info/cachedir/spec.html for the Cache Directory Tagging Standard
--exclude-file file read exclude patterns from a file (can be specified multiple times)
--exclude-if-present stringArray takes filename[:header], exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)
--files-from string read the files to backup from file (can be combined with file args/can be specified multiple times)
67 go.mod
@@ -2,54 +2,49 @@ module github.com/restic/restic

require (
bazil.org/fuse v0.0.0-20180421153158-65cc252bf669
cloud.google.com/go v0.27.0 // indirect
github.com/Azure/azure-sdk-for-go v20.1.0+incompatible
github.com/Azure/go-autorest v10.15.3+incompatible // indirect
github.com/cenkalti/backoff v2.0.0+incompatible
cloud.google.com/go v0.36.0 // indirect
contrib.go.opencensus.io/exporter/ocagent v0.4.3 // indirect
github.com/Azure/azure-sdk-for-go v26.4.0+incompatible
github.com/Azure/go-autorest v11.4.0+incompatible // indirect
github.com/cenkalti/backoff v2.1.1+incompatible
github.com/cpuguy83/go-md2man v1.0.8 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dgrijalva/jwt-go v3.2.0+incompatible // indirect
github.com/dnaeon/go-vcr v0.0.0-20180814043457-aafff18a5cc2 // indirect
github.com/dnaeon/go-vcr v1.0.1 // indirect
github.com/elithrar/simple-scrypt v1.3.0
github.com/go-ini/ini v1.38.2 // indirect
github.com/golang/protobuf v1.2.0 // indirect
github.com/go-ini/ini v1.41.0 // indirect
github.com/google/go-cmp v0.2.0
github.com/gopherjs/gopherjs v0.0.0-20180825215210-0210a2f0f73c // indirect
github.com/hashicorp/golang-lru v0.5.0
github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e // indirect
github.com/grpc-ecosystem/grpc-gateway v1.7.0 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jtolds/gls v4.2.1+incompatible // indirect
github.com/juju/ratelimit v1.0.1
github.com/kr/fs v0.1.0 // indirect
github.com/kr/pretty v0.1.0 // indirect
github.com/kurin/blazer v0.5.1
github.com/kurin/blazer v0.5.3
github.com/marstr/guid v1.1.0 // indirect
github.com/mattn/go-isatty v0.0.4
github.com/minio/minio-go v6.0.7+incompatible
github.com/mitchellh/go-homedir v1.0.0 // indirect
github.com/ncw/swift v1.0.41
github.com/pkg/errors v0.8.0
github.com/mattn/go-isatty v0.0.7
github.com/minio/minio-go v6.0.14+incompatible
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/ncw/swift v1.0.45
github.com/pkg/errors v0.8.1
github.com/pkg/profile v1.2.1
github.com/pkg/sftp v1.8.2
github.com/pkg/xattr v0.3.1
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/pkg/sftp v1.10.0
github.com/pkg/xattr v0.4.0
github.com/restic/chunker v0.2.0
github.com/russross/blackfriday v1.5.1 // indirect
github.com/satori/go.uuid v1.2.0 // indirect
github.com/smartystreets/assertions v0.0.0-20180820201707-7c9eb446e3cf // indirect
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a // indirect
github.com/smartystreets/assertions v0.0.0-20190116191733-b6c0e53d7304 // indirect
github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c // indirect
github.com/spf13/cobra v0.0.3
github.com/spf13/pflag v1.0.2
github.com/stretchr/testify v1.2.2 // indirect
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f
golang.org/x/sys v0.0.0-20180907202204-917fdcba135d
golang.org/x/text v0.3.0
google.golang.org/api v0.0.0-20180907210053-b609d5e6b7ab
google.golang.org/appengine v1.1.0 // indirect
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
gopkg.in/ini.v1 v1.38.2 // indirect
github.com/spf13/pflag v1.0.3
github.com/stretchr/testify v1.3.0 // indirect
go.opencensus.io v0.19.0 // indirect
golang.org/x/crypto v0.0.0-20190208162236-193df9c0f06f
golang.org/x/net v0.0.0-20190206173232-65e2d4e15006
golang.org/x/oauth2 v0.0.0-20190130055435-99b60b757ec1
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4
golang.org/x/sys v0.0.0-20190316082340-a2f829d7f35f
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2
google.golang.org/api v0.1.0
google.golang.org/grpc v1.18.0 // indirect
gopkg.in/ini.v1 v1.41.0 // indirect
gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
gopkg.in/yaml.v2 v2.2.1 // indirect
)
255 go.sum
@@ -1,108 +1,265 @@
bazil.org/fuse v0.0.0-20180421153158-65cc252bf669 h1:FNCRpXiquG1aoyqcIWVFmpTSKVcx2bQD38uZZeGtdlw=
bazil.org/fuse v0.0.0-20180421153158-65cc252bf669/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.27.0 h1:Xa8ZWro6QYKOwDKtxfKsiE0ea2jD39nx32RxtF5RjYE=
cloud.google.com/go v0.27.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/Azure/azure-sdk-for-go v20.1.0+incompatible h1:b8OWFQuH5MPi2LYyAR2Ga+7KVH9ipwiSSSMga04/Urc=
github.com/Azure/azure-sdk-for-go v20.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-autorest v10.15.3+incompatible h1:nhKI/bvazIs3C3TFGoSqKY6hZ8f5od5mb5/UcS6HVIY=
github.com/Azure/go-autorest v10.15.3+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/cenkalti/backoff v2.0.0+incompatible h1:5IIPUHhlnUZbcHQsQou5k1Tn58nJkeJL9U+ig5CHJbY=
github.com/cenkalti/backoff v2.0.0+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.31.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.36.0 h1:+aCSj7tOo2LODWVEuZDZeGCckdt6MlSF+X/rB3wUiS8=
cloud.google.com/go v0.36.0/go.mod h1:RUoy9p/M4ge0HzT8L+SDZ8jg+Q6fth0CiBuhFJpSV40=
contrib.go.opencensus.io/exporter/ocagent v0.4.3 h1:QjNm697iO7CZ09IxxSiCUzOhALENIsLsixdPwjV1yGs=
contrib.go.opencensus.io/exporter/ocagent v0.4.3/go.mod h1:YuG83h+XWwqWjvCqn7vK4KSyLKhThY3+gNGQ37iS2V0=
dmitri.shuralyov.com/app/changes v0.0.0-20180602232624-0a106ad413e3/go.mod h1:Yl+fi1br7+Rr3LqpNJf1/uxUdtRUV+Tnj0o93V2B9MU=
dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBrvjyP0v+ecvNYvCpyZgu5/xkfAUhi6wJj28eUfSU=
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
git.apache.org/thrift.git v0.0.0-20181218151757-9b75e4fe745a/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/Azure/azure-sdk-for-go v26.4.0+incompatible h1:ISw3xYFYPGBmcwP7CQjzQDoYhkywcIVfYzo4CHgQzOw=
github.com/Azure/azure-sdk-for-go v26.4.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/go-autorest v11.4.0+incompatible h1:z3Yr6KYqs0nhSNwqGXEBpWK977hxVqsLv2n9PVYcixY=
github.com/Azure/go-autorest v11.4.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
github.com/cenkalti/backoff v2.1.1+incompatible h1:tKJnvO2kl0zmb/jA5UKAt4VoEVw1qxKWjE/Bpp46npY=
github.com/cenkalti/backoff v2.1.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.1.0-0.20181214143942-ba49f56771b8 h1:gUqsFVdUKoRHNg8fkFd8gB5OOEa/g5EwlAHznb4zjbI=
github.com/census-instrumentation/opencensus-proto v0.1.0-0.20181214143942-ba49f56771b8/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/cpuguy83/go-md2man v1.0.8 h1:DwoNytLphI8hzS2Af4D0dfaEaiSq2bN05mEm4R6vf8M=
github.com/cpuguy83/go-md2man v1.0.8/go.mod h1:N6JayAiVKtlHSnuTCeuLSQVs75hb8q+dYQLjr7cDsKY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dnaeon/go-vcr v0.0.0-20180814043457-aafff18a5cc2 h1:G9/PqfhOrt8JXnw0DGTfVoOkKHDhOlEZqhE/cu+NvQM=
github.com/dnaeon/go-vcr v0.0.0-20180814043457-aafff18a5cc2/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/dnaeon/go-vcr v1.0.1 h1:r8L/HqC0Hje5AXMu1ooW8oyQyOFv4GxqpL0nRP7SLLY=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elithrar/simple-scrypt v1.3.0 h1:KIlOlxdoQf9JWKl5lMAJ28SY2URB0XTRDn2TckyzAZg=
github.com/elithrar/simple-scrypt v1.3.0/go.mod h1:U2XQRI95XHY0St410VE3UjT7vuKb1qPwrl/EJwEqnZo=
github.com/go-ini/ini v1.38.2 h1:6Hl/z3p3iFkA0dlDfzYxuFuUGD+kaweypF6btsR2/Q4=
github.com/go-ini/ini v1.38.2/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/go-ini/ini v1.41.0 h1:526aoxDtxRHFQKMZfcX2OG9oOI8TJ5yPLM0Mkno/uTY=
github.com/go-ini/ini v1.41.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/gopherjs/gopherjs v0.0.0-20180825215210-0210a2f0f73c h1:16eHWuMGvCjSfgRJKqIzapE78onvvTbdi1rMkU00lZw=
github.com/gopherjs/gopherjs v0.0.0-20180825215210-0210a2f0f73c/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go/v2 v2.0.3/go.mod h1:LLvjysVCY1JZeum8Z6l8qUty8fiNwE08qbEPm1M08qg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e h1:JKmoR8x90Iww1ks85zJ1lfDGgIiMDuIptTOhJq+zKyg=
github.com/gopherjs/gopherjs v0.0.0-20181103185306-d547d1d9531e/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.6.2/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.7.0 h1:tPFY/SM+d656aSgLWO2Eckc3ExwpwwybwdN5Ph20h1A=
github.com/grpc-ecosystem/grpc-gateway v1.7.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0B/fFc00Y+Rasa88328GlI/XbtyysCtTHZS8h7IrBU=
github.com/jtolds/gls v4.2.1+incompatible h1:fSuqC+Gmlu6l/ZYAoZzx2pyucC8Xza35fpRVWLVmUEE=
github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/juju/ratelimit v1.0.1 h1:+7AIFJVQ0EQgq/K9+0Krm7m530Du7tIz0METWzN0RgY=
github.com/juju/ratelimit v1.0.1/go.mod h1:qapgC/Gy+xNh9UxzV13HGGl/6UXNN+ct+vwSgWNm/qk=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kurin/blazer v0.5.1 h1:mBc4i1uhHJEqU0KvzOgpMHhkwf+EcXvxjWEUS7HG+eY=
github.com/kurin/blazer v0.5.1/go.mod h1:4FCXMUWo9DllR2Do4TtBd377ezyAJ51vB5uTBjt0pGU=
github.com/kurin/blazer v0.5.3 h1:SAgYv0TKU0kN/ETfO5ExjNAPyMt2FocO2s/UlCHfjAk=
github.com/kurin/blazer v0.5.3/go.mod h1:4FCXMUWo9DllR2Do4TtBd377ezyAJ51vB5uTBjt0pGU=
github.com/marstr/guid v1.1.0 h1:/M4H/1G4avsieL6BbUwCOBzulmoeKVP5ux/3mQNnbyI=
github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
github.com/mattn/go-isatty v0.0.4 h1:bnP0vzxcAdeI1zdubAl5PjU6zsERjGZb7raWodagDYs=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/minio/minio-go v6.0.7+incompatible h1:nWABqotkiT/3aLgFnG30doQiwFkDMM9xnGGQnS+Ao6M=
github.com/minio/minio-go v6.0.7+incompatible/go.mod h1:7guKYtitv8dktvNUGrhzmNlA5wrAABTQXCoesZdFQO8=
github.com/mitchellh/go-homedir v1.0.0 h1:vKb8ShqSby24Yrqr/yDYkuFz8d0WUjys40rvnGC8aR0=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/ncw/swift v1.0.41 h1:kfoTVQKt1A4n0m1Q3YWku9OoXfpo06biqVfi73yseBs=
github.com/ncw/swift v1.0.41/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/pkg/errors v0.8.0 h1:WdK/asTD0HN+q6hsWO3/vpuAkAr+tw6aNJNDFFf0+qw=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/mattn/go-isatty v0.0.7 h1:UvyT9uN+3r7yLEYSlJsbQGdsaB/a0DlgWP3pql6iwOc=
github.com/mattn/go-isatty v0.0.7/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/microcosm-cc/bluemonday v1.0.1/go.mod h1:hsXNsILzKxV+sX77C5b8FSuKF00vh2OMYv+xgHpAMF4=
github.com/minio/minio-go v6.0.14+incompatible h1:fnV+GD28LeqdN6vT2XdGKW8Qe/IfjJDswNVuni6km9o=
github.com/minio/minio-go v6.0.14+incompatible/go.mod h1:7guKYtitv8dktvNUGrhzmNlA5wrAABTQXCoesZdFQO8=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/ncw/swift v1.0.45 h1:n6MfkuP599wWdcIOiBv4ESRodkzvudF65hNgNXe6tj0=
github.com/ncw/swift v1.0.45/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/neelance/astrewrite v0.0.0-20160511093645-99348263ae86/go.mod h1:kHJEU3ofeGjhHklVoIGuVj85JJwZ6kWPaJwCIxgnFmo=
github.com/neelance/sourcemap v0.0.0-20151028013722-8c68805598ab/go.mod h1:Qr6/a/Q4r9LP1IltGz7tA7iOK1WonHEYhu1HRBA7ZiM=
github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/openzipkin/zipkin-go v0.1.3/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.2.1 h1:F++O52m40owAmADcojzM+9gyjmMOY/T4oYJkgFDH8RE=
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/pkg/sftp v1.8.2 h1:3upwlsK5/USEeM5gzIe9eWdzU4sV+kG3gKKg3RLBuWE=
github.com/pkg/sftp v1.8.2/go.mod h1:NxmoDg/QLVWluQDUYG7XBZTLUpKeFa8e3aMf1BfjyHk=
github.com/pkg/xattr v0.3.1 h1:6ceg5jxT3cH4lM5n8S2PmiNeOv61MK08yvvYJwyrPH0=
github.com/pkg/xattr v0.3.1/go.mod h1:CBdxFOf0VLbaj6HKuP2ITOVV7NY6ycPKgIgnSx2ZNVs=
github.com/pkg/sftp v1.10.0 h1:DGA1KlA9esU6WcicH+P8PxFZOl15O6GYtab1cIJdOlE=
github.com/pkg/sftp v1.10.0/go.mod h1:NxmoDg/QLVWluQDUYG7XBZTLUpKeFa8e3aMf1BfjyHk=
github.com/pkg/xattr v0.4.0 h1:OacIpDCc4H+4b/bWpYBLOT5gXk7G/jwx5O1D8x8Zewo=
github.com/pkg/xattr v0.4.0/go.mod h1:W2cGD0TBEus7MkUgv0tNZ9JutLtVO3cXu+IBRuHqnFs=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181218105931-67670fe90761/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/restic/chunker v0.2.0 h1:GjvmvFuv2mx0iekZs+iAlrioo2UtgsGSSplvoXaVHDU=
github.com/restic/chunker v0.2.0/go.mod h1:VdjruEj+7BU1ZZTW8Qqi1exxRx2Omf2JH0NsUEkQ29s=
github.com/russross/blackfriday v1.5.1 h1:B8ZN6pD4PVofmlDCDUdELeYrbsVIDM/bpjW3v3zgcRc=
github.com/russross/blackfriday v1.5.1/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday v1.5.2 h1:HyvC0ARfnZBqnXwABFeSZHpKvJHJJfPz81GNueLj0oo=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/smartystreets/assertions v0.0.0-20180820201707-7c9eb446e3cf h1:6V1qxN6Usn4jy8unvggSJz/NC790tefw8Zdy6OZS5co=
github.com/smartystreets/assertions v0.0.0-20180820201707-7c9eb446e3cf/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a h1:JSvGDIbmil4Ui/dDdFBExb7/cmkNjyX5F97oglmvCDo=
github.com/smartystreets/goconvey v0.0.0-20180222194500-ef6db91d284a/go.mod h1:XDJAKZRPZ1CvBcN2aX5YOUTYGHki24fSF0Iv48Ibg0s=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/component v0.0.0-20170202220835-f88ec8f54cc4/go.mod h1:XhFIlyj5a1fBNx5aJTbKoIq0mNaPvOagO+HjB3EtxrY=
github.com/shurcooL/events v0.0.0-20181021180414-410e4ca65f48/go.mod h1:5u70Mqkb5O5cxEA8nxTsgrgLehJeAw6Oc4Ab1c/P1HM=
github.com/shurcooL/github_flavored_markdown v0.0.0-20181002035957-2122de532470/go.mod h1:2dOwnU2uBioM+SGy2aZoq1f/Sd1l9OkAeAUvjSyvgU0=
github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ=
github.com/shurcooL/gofontwoff v0.0.0-20180329035133-29b52fc0a18d/go.mod h1:05UtEgK5zq39gLST6uB0cf3NEHjETfB4Fgr3Gx5R9Vw=
github.com/shurcooL/gopherjslib v0.0.0-20160914041154-feb6d3990c2c/go.mod h1:8d3azKNyqcHP1GaQE/c6dDgjkgSx2BZ4IoEi4F1reUI=
github.com/shurcooL/highlight_diff v0.0.0-20170515013008-09bb4053de1b/go.mod h1:ZpfEhSmds4ytuByIcDnOLkTHGUI6KNqRNPDLHDk+mUU=
github.com/shurcooL/highlight_go v0.0.0-20181028180052-98c3abbbae20/go.mod h1:UDKB5a1T23gOMUJrI+uSuH0VRDStOiUVSjBTRDVBVag=
github.com/shurcooL/home v0.0.0-20181020052607-80b7ffcb30f9/go.mod h1:+rgNQw2P9ARFAs37qieuu7ohDNQ3gds9msbT2yn85sg=
github.com/shurcooL/htmlg v0.0.0-20170918183704-d01228ac9e50/go.mod h1:zPn1wHpTIePGnXSHpsVPWEktKXHr6+SS6x/IKRb7cpw=
github.com/shurcooL/httperror v0.0.0-20170206035902-86b7830d14cc/go.mod h1:aYMfkZ6DWSJPJ6c4Wwz3QtW22G7mf/PEgaB9k/ik5+Y=
github.com/shurcooL/httpfs v0.0.0-20171119174359-809beceb2371/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/httpgzip v0.0.0-20180522190206-b1c53ac65af9/go.mod h1:919LwcH0M7/W4fcZ0/jy0qGght1GIhqyS/EgWGH2j5Q=
github.com/shurcooL/issues v0.0.0-20181008053335-6292fdc1e191/go.mod h1:e2qWDig5bLteJ4fwvDAc2NHzqFEthkqn7aOZAOpj+PQ=
github.com/shurcooL/issuesapp v0.0.0-20180602232740-048589ce2241/go.mod h1:NPpHK2TI7iSaM0buivtFUc9offApnI0Alt/K8hcHy0I=
github.com/shurcooL/notifications v0.0.0-20181007000457-627ab5aea122/go.mod h1:b5uSkrEVM1jQUspwbixRBhaIjIzL2xazXp6kntxYle0=
github.com/shurcooL/octicon v0.0.0-20181028054416-fa4f57f9efb2/go.mod h1:eWdoE5JD4R5UVWDucdOPg1g2fqQRq78IQa9zlOV1vpQ=
github.com/shurcooL/reactions v0.0.0-20181006231557-f2e0b4ca5b82/go.mod h1:TCR1lToEk4d2s07G3XGfz2QrgHXg4RJBvjrOozvoWfk=
github.com/shurcooL/sanitized_anchor_name v0.0.0-20170918181015-86672fcb3f95/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/users v0.0.0-20180125191416-49c67e49c537/go.mod h1:QJTqeLYEDaXHZDBsXlPCDqdhQuJkuw4NOtaxYe3xii4=
github.com/shurcooL/webdavfs v0.0.0-20170829043945-18c3829fa133/go.mod h1:hKmq5kWdCj2z2KEozexVbfEZIWiTjhE0+UjmZgPqehw=
github.com/smartystreets/assertions v0.0.0-20190116191733-b6c0e53d7304 h1:Jpy1PXuP99tXNrhbq2BaPz9B+jNAvH1JPQQpG/9GCXY=
github.com/smartystreets/assertions v0.0.0-20190116191733-b6c0e53d7304/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c h1:Ho+uVpkel/udgjbwB5Lktg9BtvJSh2DT0Hi6LPSyI2w=
github.com/smartystreets/goconvey v0.0.0-20181108003508-044398e4856c/go.mod h1:XDJAKZRPZ1CvBcN2aX5YOUTYGHki24fSF0Iv48Ibg0s=
github.com/sourcegraph/annotate v0.0.0-20160123013949-f4cad6c6324d/go.mod h1:UdhH50NIW0fCiwBSr0co2m7BnFLdv4fQTgdqdJTHFeE=
github.com/sourcegraph/syntaxhighlight v0.0.0-20170531221838-bd320f5d308e/go.mod h1:HuIsMU8RRBOtsCgI77wP899iHVBQpCmg4ErYMZB+2IA=
github.com/spf13/cobra v0.0.3 h1:ZlrZ4XsMRm04Fr5pSFxBgfND2EBVa1nLpiy1stUsX/8=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/pflag v1.0.2 h1:Fy0orTDgHdbnzHcsOgfCN4LtHf0ec3wwtiwJqwvf3Gc=
github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793 h1:u+LnwYTOOW7Ukr/fppxEb1Nwz0AtPflrblfvUudpo+I=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA=
go.opencensus.io v0.18.1-0.20181204023538-aab39bd6a98b/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA=
go.opencensus.io v0.19.0 h1:+jrnNy8MR4GZXvwF9PEuSyHxA4NaTf6601oNRwCSXq0=
go.opencensus.io v0.19.0/go.mod h1:AYeH0+ZxYyghG8diqaaIq/9P3VgCCt5GF2ldCY4dkFg=
go4.org v0.0.0-20180809161055-417644f6feb5/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
golang.org/x/build v0.0.0-20190111050920-041ab4dc3f9d/go.mod h1:OWs+y06UdEOHN4y+MfF/py+xQ/tYqIWW03b70/CG9Rw=
|
||||
golang.org/x/crypto v0.0.0-20181030102418-4d3f4d9ffa16/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190208162236-193df9c0f06f h1:ETU2VEl7TnT5bl7IvuKEzTDpplg5wzGYsOCAPhdoEIg=
|
||||
golang.org/x/crypto v0.0.0-20190208162236-193df9c0f06f/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20181217174547-8f45f776aaf1/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181029044818-c44066c5c816/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181106065722-10aee1819953/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181217023233-e147a9138326/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190206173232-65e2d4e15006 h1:bfLnR+k0tq5Lqt6dflRLcZiz6UaXCMt3vhYJ1l4FQ80=
|
||||
golang.org/x/net v0.0.0-20190206173232-65e2d4e15006/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be h1:vEDujvNQGv4jgYKudGeI/+DAX4Jffq6hpD55MmoEvKs=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190130055435-99b60b757ec1 h1:VeAkjQVzKLmu+JnFcK96TPbkuaTIqwGGAzQ9hgwPjVg=
|
||||
golang.org/x/oauth2 v0.0.0-20190130055435-99b60b757ec1/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f h1:wMNYb4v58l5UBM7MYRLPG6ZhfOqbKu7X5eyFl8ZhKvA=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180525142821-c11f84a56e43/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180907202204-917fdcba135d h1:kWn1hlsqeUrk6JsLJO0ZFyz9bMg8u85voZlIuc68ZU4=
|
||||
golang.org/x/sys v0.0.0-20180907202204-917fdcba135d/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181021155630-eda9bb28ed51/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181029174526-d69651ed3497/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181218192612-074acd46bca6/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190316082340-a2f829d7f35f h1:yCrMx/EeIue0+Qca57bWZS7VX6ymEoypmhWyPhz0NHM=
|
||||
golang.org/x/sys v0.0.0-20190316082340-a2f829d7f35f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
google.golang.org/api v0.0.0-20180907210053-b609d5e6b7ab h1:qNpJa8m9WofZ7RLj+7o15Ppapwm30+RweyIDSNpw8ps=
|
||||
google.golang.org/api v0.0.0-20180907210053-b609d5e6b7ab/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
|
||||
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2 h1:z99zHgr7hKfrUcX/KsoJk5FJfjTceCKIp96+biqP4To=
|
||||
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20181030000716-a0a13e073c7b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20181219222714-6e267b5cc78e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
|
||||
google.golang.org/api v0.0.0-20181030000543-1d582fd0359e/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
|
||||
google.golang.org/api v0.0.0-20181220000619-583d854617af/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
|
||||
google.golang.org/api v0.1.0 h1:K6z2u68e86TPdSdefXdzvXgR1zEMa+459vBSfWYAZkI=
|
||||
google.golang.org/api v0.1.0/go.mod h1:UGEZY7KEX120AnNLIHFMKIo4obdJhkp2tPbaPlQx13Y=
|
||||
google.golang.org/appengine v1.1.0 h1:igQkv0AAhEIvTEpD5LIpAfav2eeVO9HBTjvKHVJPRSs=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20181029155118-b69ba1387ce2/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20181202183823-bd91e49a0898/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
|
||||
google.golang.org/genproto v0.0.0-20181219182458-5a97ab628bfb/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
|
||||
google.golang.org/genproto v0.0.0-20190201180003-4b09977fb922 h1:mBVYJnbrXLA/ZCBTCe7PtEgAUP+1bg92qTaFoPHdz+8=
|
||||
google.golang.org/genproto v0.0.0-20190201180003-4b09977fb922/go.mod h1:L3J43x8/uS+qIUoksaLKe6OS3nUKxOKuIFz1sl2/jx4=
|
||||
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
|
||||
google.golang.org/grpc v1.15.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
|
||||
google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
|
||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
google.golang.org/grpc v1.18.0 h1:IZl7mfBGfbhYx2p2rKRtYgDFw6SBz+kclmxYrCksPPA=
|
||||
google.golang.org/grpc v1.18.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/ini.v1 v1.38.2 h1:dGcbywv4RufeGeiMycPT/plKB5FtmLKLnWKwBiLhUA4=
|
||||
gopkg.in/ini.v1 v1.38.2/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
||||
gopkg.in/ini.v1 v1.41.0 h1:Ka3ViY6gNYSKiVy71zXBEqKplnV35ImDLVG+8uoIklE=
|
||||
gopkg.in/ini.v1 v1.41.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637 h1:yiW+nvdHb9LVqSHQBXfZCieqV4fzYhNBql77zY0ykqs=
|
||||
gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637/go.mod h1:BHsqpu/nsuzkT5BpiH1EMZPLyqSMM8JbIavyFACoFNk=
|
||||
gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE=
|
||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJdjuHRquDANNeA4x7B8WQ9o=
|
||||
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20180920025451-e3ad64cb4ed3/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
sourcegraph.com/sourcegraph/go-diff v0.5.0/go.mod h1:kuch7UrkMzY0X+p9CRK03kfuPQ2zzQcaEFbx8wA8rck=
|
||||
sourcegraph.com/sqs/pbtypes v0.0.0-20180604144634-d3ebe8f20ae4/go.mod h1:ketZ/q3QxT9HOBeFhu6RdvsftgpsbFHBF5Cas6cDKZ0=
|
||||
|
||||
@@ -78,7 +78,8 @@ type Archiver struct {
// WithAtime configures if the access time for files and directories should
// be saved. Enabling it may result in much metadata, so it's off by
// default.
WithAtime bool
WithAtime bool
IgnoreInode bool
}

// Options is used to configure the archiver.

@@ -133,6 +134,7 @@ func New(repo restic.Repository, fs fs.FS, opts Options) *Archiver {
CompleteItem: func(string, *restic.Node, *restic.Node, ItemStats, time.Duration) {},
StartFile: func(string) {},
CompleteBlob: func(string, uint64) {},
IgnoreInode: false,
}

return arch

@@ -383,7 +385,7 @@ func (arch *Archiver) Save(ctx context.Context, snPath, target string, previous
}

// use previous node if the file hasn't changed
if previous != nil && !fileChanged(fi, previous) {
if previous != nil && !fileChanged(fi, previous, arch.IgnoreInode) {
debug.Log("%v hasn't changed, returning old node", target)
arch.CompleteItem(snPath, previous, previous, ItemStats{}, time.Since(start))
arch.CompleteBlob(snPath, previous.Size)

@@ -436,7 +438,7 @@ func (arch *Archiver) Save(ctx context.Context, snPath, target string, previous

// fileChanged returns true if the file's content has changed since the node
// was created.
func fileChanged(fi os.FileInfo, node *restic.Node) bool {
func fileChanged(fi os.FileInfo, node *restic.Node, ignoreInode bool) bool {
if node == nil {
return true
}

@@ -458,7 +460,7 @@ func fileChanged(fi os.FileInfo, node *restic.Node) bool {
}

// check inode
if node.Inode != extFI.Inode {
if !ignoreInode && node.Inode != extFI.Inode {
return true
}
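Taken together, these hunks let a caller opt out of inode comparison when deciding whether a file is unchanged. A minimal standalone sketch of the same decision logic; the node type and the inode argument here are simplified stand-ins for restic's restic.Node and extended stat data, not the real API:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // node is a simplified stand-in for the metadata restic stores per file.
    type node struct {
        Size    uint64
        ModTime time.Time
        Inode   uint64
    }

    // fileChanged mirrors the logic above: a size or mtime change always
    // counts; an inode change only counts unless ignoreInode is set.
    func fileChanged(fi os.FileInfo, n *node, inode uint64, ignoreInode bool) bool {
        if n == nil {
            return true
        }
        if uint64(fi.Size()) != n.Size || !fi.ModTime().Equal(n.ModTime) {
            return true
        }
        if !ignoreInode && inode != n.Inode {
            return true
        }
        return false
    }

    func main() {
        fi, err := os.Lstat("example.txt") // hypothetical file
        if err != nil {
            fmt.Println(err)
            return
        }
        prev := &node{Size: uint64(fi.Size()), ModTime: fi.ModTime(), Inode: 1}
        // same size and mtime, different inode: unchanged only with ignoreInode
        fmt.Println(fileChanged(fi, prev, 2, true))  // false
        fmt.Println(fileChanged(fi, prev, 2, false)) // true
    }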
@@ -160,7 +160,6 @@ func TestArchiverSaveFileReaderFS(t *testing.T) {
var tests = []struct {
Data string
}{
{Data: ""},
{Data: "foo"},
{Data: string(restictest.Random(23, 12*1024*1024+1287898))},
}

@@ -271,7 +270,6 @@ func TestArchiverSaveReaderFS(t *testing.T) {
var tests = []struct {
Data string
}{
{Data: ""},
{Data: "foo"},
{Data: string(restictest.Random(23, 12*1024*1024+1287898))},
}

@@ -557,9 +555,11 @@ func TestFileChanged(t *testing.T) {
}

var tests = []struct {
Name string
Content []byte
Modify func(t testing.TB, filename string)
Name string
Content []byte
Modify func(t testing.TB, filename string)
IgnoreInode bool
Check bool
}{
{
Name: "same-content-new-file",

@@ -598,6 +598,18 @@ func TestFileChanged(t *testing.T) {
save(t, filename, defaultContent)
},
},
{
Name: "ignore-inode",
Modify: func(t testing.TB, filename string) {
fi := lstat(t, filename)
remove(t, filename)
sleep()
save(t, filename, defaultContent)
setTimestamp(t, filename, fi.ModTime(), fi.ModTime())
},
IgnoreInode: true,
Check: true,
},
}

for _, test := range tests {

@@ -615,15 +627,19 @@ func TestFileChanged(t *testing.T) {
fiBefore := lstat(t, filename)
node := nodeFromFI(t, filename, fiBefore)

if fileChanged(fiBefore, node) {
if fileChanged(fiBefore, node, false) {
t.Fatalf("unchanged file detected as changed")
}

test.Modify(t, filename)

fiAfter := lstat(t, filename)
if !fileChanged(fiAfter, node) {
t.Fatalf("modified file detected as unchanged")
if test.Check == fileChanged(fiAfter, node, test.IgnoreInode) {
if test.Check {
t.Fatalf("unmodified file detected as changed")
} else {
t.Fatalf("modified file detected as unchanged")
}
}
})
}
@@ -639,7 +655,7 @@ func TestFilChangedSpecialCases(t *testing.T) {

t.Run("nil-node", func(t *testing.T) {
fi := lstat(t, filename)
if !fileChanged(fi, nil) {
if !fileChanged(fi, nil, false) {
t.Fatal("nil node detected as unchanged")
}
})

@@ -648,7 +664,7 @@
fi := lstat(t, filename)
node := nodeFromFI(t, filename, fi)
node.Type = "symlink"
if !fileChanged(fi, node) {
if !fileChanged(fi, node, false) {
t.Fatal("node with changed type detected as unchanged")
}
})
@@ -127,8 +127,6 @@ func New(cfg Config, lim limiter.Limiter) (*Backend, error) {
return nil, err
}
args = append(args, a...)
} else {
args = append(args, "rclone")
}

// then add the arguments

@@ -139,10 +137,6 @@ func New(cfg Config, lim limiter.Limiter) (*Backend, error) {
}

args = append(args, a...)
} else {
args = append(args,
"serve", "restic", "--stdio",
"--b2-hard-delete", "--drive-use-trash=false")
}

// finally, add the remote
@@ -15,15 +15,19 @@ type Config struct {
Connections uint `option:"connections" help:"set a limit for the number of concurrent connections (default: 5)"`
}

var defaultConfig = Config{
Program: "rclone",
Args: "serve restic --stdio --b2-hard-delete --drive-use-trash=false",
Connections: 5,
}

func init() {
options.Register("rclone", Config{})
}

// NewConfig returns a new Config with the default values filled in.
func NewConfig() Config {
return Config{
Connections: 5,
}
return defaultConfig
}

// ParseConfig parses the string s and extracts the remote server URL.
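Keeping the defaults in one package-level value is what lets the test below compare against defaultConfig.Program instead of repeating string literals. A reduced sketch of the pattern, with the field set trimmed:

    package rclone

    // Config holds the rclone backend settings (trimmed sketch).
    type Config struct {
        Program     string
        Args        string
        Remote      string
        Connections uint
    }

    // defaultConfig is the single source of truth for the defaults.
    var defaultConfig = Config{
        Program:     "rclone",
        Args:        "serve restic --stdio --b2-hard-delete --drive-use-trash=false",
        Connections: 5,
    }

    // NewConfig returns a copy of the defaults; Config contains no
    // reference types, so plain assignment copies every field.
    func NewConfig() Config {
        return defaultConfig
    }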
@@ -14,7 +14,9 @@ func TestParseConfig(t *testing.T) {
"rclone:local:foo:/bar",
Config{
Remote: "local:foo:/bar",
Connections: 5,
Program: defaultConfig.Program,
Args: defaultConfig.Args,
Connections: defaultConfig.Connections,
},
},
}
@@ -18,6 +18,7 @@ type Config struct {
Bucket string
Prefix string
Layout string `option:"layout" help:"use this backend layout (default: auto-detect)"`
StorageClass string `option:"storage-class" help:"set S3 storage class (STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING or REDUCED_REDUNDANCY)"`

Connections uint `option:"connections" help:"set a limit for the number of concurrent connections (default: 5)"`
MaxRetries uint `option:"retries" help:"set the number of retries attempted"`
@@ -260,7 +260,7 @@ func (be *Backend) Save(ctx context.Context, h restic.Handle, rd restic.RewindRe
be.sem.GetToken()
defer be.sem.ReleaseToken()

opts := minio.PutObjectOptions{}
opts := minio.PutObjectOptions{StorageClass: be.cfg.StorageClass}
opts.ContentType = "application/octet-stream"

debug.Log("PutObject(%v, %v, %v)", be.cfg.Bucket, objName, rd.Length())
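On the minio-go side nothing else is needed: the storage class simply travels inside PutObjectOptions. A hedged upload sketch; the endpoint, credentials, bucket and object name are placeholders, and the minio-go API of that era is assumed:

    package main

    import (
        "bytes"
        "log"

        minio "github.com/minio/minio-go"
    )

    func main() {
        client, err := minio.New("s3.amazonaws.com", "ACCESS_KEY", "SECRET_KEY", true)
        if err != nil {
            log.Fatal(err)
        }

        data := []byte("pack file contents")
        opts := minio.PutObjectOptions{
            StorageClass: "STANDARD_IA", // would come from cfg.StorageClass, as above
            ContentType:  "application/octet-stream",
        }

        // upload the object with the requested storage class
        if _, err := client.PutObject("my-bucket", "data/example", bytes.NewReader(data), int64(len(data)), opts); err != nil {
            log.Fatal(err)
        }
    }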
@@ -23,6 +23,11 @@ type Config struct {
StorageURL string
AuthToken string

// auth v3 only
ApplicationCredentialID string
ApplicationCredentialName string
ApplicationCredentialSecret string

Container string
Prefix string
DefaultContainerPolicy string

@@ -96,6 +101,11 @@ func ApplyEnvironment(prefix string, cfg interface{}) error {
{&c.UserName, prefix + "ST_USER"},
{&c.APIKey, prefix + "ST_KEY"},

// Application Credential auth
{&c.ApplicationCredentialID, prefix + "OS_APPLICATION_CREDENTIAL_ID"},
{&c.ApplicationCredentialName, prefix + "OS_APPLICATION_CREDENTIAL_NAME"},
{&c.ApplicationCredentialSecret, prefix + "OS_APPLICATION_CREDENTIAL_SECRET"},

// Manual authentication
{&c.StorageURL, prefix + "OS_STORAGE_URL"},
{&c.AuthToken, prefix + "OS_AUTH_TOKEN"},
@@ -43,19 +43,22 @@ func Open(cfg Config, rt http.RoundTripper) (restic.Backend, error) {

be := &beSwift{
conn: &swift.Connection{
UserName: cfg.UserName,
Domain: cfg.Domain,
ApiKey: cfg.APIKey,
AuthUrl: cfg.AuthURL,
Region: cfg.Region,
Tenant: cfg.Tenant,
TenantId: cfg.TenantID,
TenantDomain: cfg.TenantDomain,
TrustId: cfg.TrustID,
StorageUrl: cfg.StorageURL,
AuthToken: cfg.AuthToken,
ConnectTimeout: time.Minute,
Timeout: time.Minute,
UserName: cfg.UserName,
Domain: cfg.Domain,
ApiKey: cfg.APIKey,
AuthUrl: cfg.AuthURL,
Region: cfg.Region,
Tenant: cfg.Tenant,
TenantId: cfg.TenantID,
TenantDomain: cfg.TenantDomain,
TrustId: cfg.TrustID,
StorageUrl: cfg.StorageURL,
AuthToken: cfg.AuthToken,
ApplicationCredentialId: cfg.ApplicationCredentialID,
ApplicationCredentialName: cfg.ApplicationCredentialName,
ApplicationCredentialSecret: cfg.ApplicationCredentialSecret,
ConnectTimeout: time.Minute,
Timeout: time.Minute,

Transport: rt,
},
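The three new fields are simply passed through to ncw/swift, which performs the Keystone v3 application-credential authentication itself. A hedged sketch of a direct connection using the same environment variables as above; the field names are assumed to match the ncw/swift release of that period:

    package main

    import (
        "log"
        "os"
        "time"

        "github.com/ncw/swift"
    )

    func main() {
        conn := &swift.Connection{
            AuthUrl: os.Getenv("OS_AUTH_URL"),
            Domain:  os.Getenv("OS_USER_DOMAIN_NAME"),

            // Keystone v3 application credentials, wired up as in the hunk above.
            ApplicationCredentialId:     os.Getenv("OS_APPLICATION_CREDENTIAL_ID"),
            ApplicationCredentialName:   os.Getenv("OS_APPLICATION_CREDENTIAL_NAME"),
            ApplicationCredentialSecret: os.Getenv("OS_APPLICATION_CREDENTIAL_SECRET"),

            ConnectTimeout: time.Minute,
            Timeout:        time.Minute,
        }

        if err := conn.Authenticate(); err != nil {
            log.Fatal(err)
        }
        log.Println("storage URL:", conn.StorageUrl)
    }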
@@ -79,7 +79,7 @@ func (s *Suite) TestConfig(t *testing.T) {
var testString = "Config"

// create config and read it back
_, err := backend.LoadAll(context.TODO(), b, restic.Handle{Type: restic.ConfigFile})
_, err := backend.LoadAll(context.TODO(), nil, b, restic.Handle{Type: restic.ConfigFile})
if err == nil {
t.Fatalf("did not get expected error for non-existing config")
}

@@ -93,7 +93,7 @@ func (s *Suite) TestConfig(t *testing.T) {
// same config
for _, name := range []string{"", "foo", "bar", "0000000000000000000000000000000000000000000000000000000000000000"} {
h := restic.Handle{Type: restic.ConfigFile, Name: name}
buf, err := backend.LoadAll(context.TODO(), b, h)
buf, err := backend.LoadAll(context.TODO(), nil, b, h)
if err != nil {
t.Fatalf("unable to read config with name %q: %+v", name, err)
}

@@ -491,7 +491,7 @@ func (s *Suite) TestSave(t *testing.T) {
err := b.Save(context.TODO(), h, restic.NewByteReader(data))
test.OK(t, err)

buf, err := backend.LoadAll(context.TODO(), b, h)
buf, err := backend.LoadAll(context.TODO(), nil, b, h)
test.OK(t, err)
if len(buf) != len(data) {
t.Fatalf("number of bytes does not match, want %v, got %v", len(data), len(buf))

@@ -584,7 +584,7 @@ func (s *Suite) TestSaveFilenames(t *testing.T) {
continue
}

buf, err := backend.LoadAll(context.TODO(), b, h)
buf, err := backend.LoadAll(context.TODO(), nil, b, h)
if err != nil {
t.Errorf("test %d failed: Load() returned %+v", i, err)
continue

@@ -734,7 +734,7 @@ func (s *Suite) TestBackend(t *testing.T) {

// test Load()
h := restic.Handle{Type: tpe, Name: ts.id}
buf, err := backend.LoadAll(context.TODO(), b, h)
buf, err := backend.LoadAll(context.TODO(), nil, b, h)
test.OK(t, err)
test.Equals(t, ts.data, string(buf))
@@ -1,20 +1,33 @@
package backend

import (
"bytes"
"context"
"io"
"io/ioutil"

"github.com/restic/restic/internal/restic"
)

// LoadAll reads all data stored in the backend for the handle.
func LoadAll(ctx context.Context, be restic.Backend, h restic.Handle) (buf []byte, err error) {
err = be.Load(ctx, h, 0, 0, func(rd io.Reader) (ierr error) {
buf, ierr = ioutil.ReadAll(rd)
return ierr
// LoadAll reads all data stored in the backend for the handle into the given
// buffer, which is truncated. If the buffer is not large enough or nil, a new
// one is allocated.
func LoadAll(ctx context.Context, buf []byte, be restic.Backend, h restic.Handle) ([]byte, error) {
err := be.Load(ctx, h, 0, 0, func(rd io.Reader) error {
// make sure this is idempotent, in case an error occurs this function may be called multiple times!
wr := bytes.NewBuffer(buf[:0])
_, cerr := io.Copy(wr, rd)
if cerr != nil {
return cerr
}
buf = wr.Bytes()
return nil
})
return buf, err

if err != nil {
return nil, err
}

return buf, nil
}

// LimitedReadCloser wraps io.LimitedReader and exposes the Close() method.
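Two details of the new LoadAll are worth spelling out: the callback must be idempotent because transport retries may invoke it more than once, and the returned buffer replaces the one passed in, so a caller can recycle a single allocation across many loads. A sketch of that calling pattern; it uses restic's internal packages, so it compiles only inside the repository tree:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/restic/restic/internal/backend"
        "github.com/restic/restic/internal/backend/mem"
        "github.com/restic/restic/internal/restic"
    )

    func main() {
        be := mem.New()
        ctx := context.TODO()

        // store two blobs, keyed by their content hash
        var handles []restic.Handle
        for _, data := range [][]byte{[]byte("first"), []byte("second, longer payload")} {
            h := restic.Handle{Type: restic.DataFile, Name: restic.Hash(data).String()}
            if err := be.Save(ctx, h, restic.NewByteReader(data)); err != nil {
                log.Fatal(err)
            }
            handles = append(handles, h)
        }

        // one buffer serves every load; LoadAll grows it only when needed
        var buf []byte
        for _, h := range handles {
            var err error
            buf, err = backend.LoadAll(ctx, buf, be, h)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%v: %d bytes\n", h.Name[:8], len(buf))
        }
    }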
@@ -19,6 +19,7 @@ const MiB = 1 << 20

func TestLoadAll(t *testing.T) {
b := mem.New()
var buf []byte

for i := 0; i < 20; i++ {
data := rtest.Random(23+i, rand.Intn(MiB)+500*KiB)

@@ -28,7 +29,7 @@ func TestLoadAll(t *testing.T) {
err := b.Save(context.TODO(), h, restic.NewByteReader(data))
rtest.OK(t, err)

buf, err := backend.LoadAll(context.TODO(), b, restic.Handle{Type: restic.DataFile, Name: id.String()})
buf, err := backend.LoadAll(context.TODO(), buf, b, restic.Handle{Type: restic.DataFile, Name: id.String()})
rtest.OK(t, err)

if len(buf) != len(data) {

@@ -43,55 +44,66 @@ func TestLoadAll(t *testing.T) {
}
}

func TestLoadSmallBuffer(t *testing.T) {
b := mem.New()

for i := 0; i < 20; i++ {
data := rtest.Random(23+i, rand.Intn(MiB)+500*KiB)

id := restic.Hash(data)
h := restic.Handle{Name: id.String(), Type: restic.DataFile}
err := b.Save(context.TODO(), h, restic.NewByteReader(data))
rtest.OK(t, err)

buf, err := backend.LoadAll(context.TODO(), b, restic.Handle{Type: restic.DataFile, Name: id.String()})
rtest.OK(t, err)

if len(buf) != len(data) {
t.Errorf("length of returned buffer does not match, want %d, got %d", len(data), len(buf))
continue
}

if !bytes.Equal(buf, data) {
t.Errorf("wrong data returned")
continue
}
func save(t testing.TB, be restic.Backend, buf []byte) restic.Handle {
id := restic.Hash(buf)
h := restic.Handle{Name: id.String(), Type: restic.DataFile}
err := be.Save(context.TODO(), h, restic.NewByteReader(buf))
if err != nil {
t.Fatal(err)
}
return h
}

func TestLoadLargeBuffer(t *testing.T) {
func TestLoadAllAppend(t *testing.T) {
b := mem.New()

for i := 0; i < 20; i++ {
data := rtest.Random(23+i, rand.Intn(MiB)+500*KiB)
h1 := save(t, b, []byte("foobar test string"))
randomData := rtest.Random(23, rand.Intn(MiB)+500*KiB)
h2 := save(t, b, randomData)

id := restic.Hash(data)
h := restic.Handle{Name: id.String(), Type: restic.DataFile}
err := b.Save(context.TODO(), h, restic.NewByteReader(data))
rtest.OK(t, err)
var tests = []struct {
handle restic.Handle
buf []byte
want []byte
}{
{
handle: h1,
buf: nil,
want: []byte("foobar test string"),
},
{
handle: h1,
buf: []byte("xxx"),
want: []byte("foobar test string"),
},
{
handle: h2,
buf: nil,
want: randomData,
},
{
handle: h2,
buf: make([]byte, 0, 200),
want: randomData,
},
{
handle: h2,
buf: []byte("foobarbaz"),
want: randomData,
},
}

buf, err := backend.LoadAll(context.TODO(), b, restic.Handle{Type: restic.DataFile, Name: id.String()})
rtest.OK(t, err)
for _, test := range tests {
t.Run("", func(t *testing.T) {
buf, err := backend.LoadAll(context.TODO(), test.buf, b, test.handle)
if err != nil {
t.Fatal(err)
}

if len(buf) != len(data) {
t.Errorf("length of returned buffer does not match, want %d, got %d", len(data), len(buf))
continue
}

if !bytes.Equal(buf, data) {
t.Errorf("wrong data returned")
continue
}
if !bytes.Equal(buf, test.want) {
t.Errorf("wrong data returned, want %q, got %q", test.want, buf)
}
})
}
}
4 internal/cache/backend_test.go vendored
@@ -17,7 +17,7 @@ import (
)

func loadAndCompare(t testing.TB, be restic.Backend, h restic.Handle, data []byte) {
buf, err := backend.LoadAll(context.TODO(), be, h)
buf, err := backend.LoadAll(context.TODO(), nil, be, h)
if err != nil {
t.Fatal(err)
}

@@ -147,7 +147,7 @@ func TestErrorBackend(t *testing.T) {
loadTest := func(wg *sync.WaitGroup, be restic.Backend) {
defer wg.Done()

buf, err := backend.LoadAll(context.TODO(), be, h)
buf, err := backend.LoadAll(context.TODO(), nil, be, h)
if err == testErr {
return
}
@@ -49,7 +49,7 @@ func New(repo restic.Repository) *Checker {
return c
}

const defaultParallelism = 40
const defaultParallelism = 5

// ErrDuplicatePacks is returned when a pack is found in more than one index.
type ErrDuplicatePacks struct {

@@ -74,82 +74,110 @@ func (err ErrOldIndexFormat) Error() string {
// LoadIndex loads all index files.
func (c *Checker) LoadIndex(ctx context.Context) (hints []error, errs []error) {
debug.Log("Start")
type indexRes struct {
Index *repository.Index
err error
ID string

// track spawned goroutines using wg, create a new context which is
// cancelled as soon as an error occurs.
wg, ctx := errgroup.WithContext(ctx)

type FileInfo struct {
restic.ID
Size int64
}

indexCh := make(chan indexRes)
type Result struct {
*repository.Index
restic.ID
Err error
}

worker := func(ctx context.Context, id restic.ID) error {
debug.Log("worker got index %v", id)
idx, err := repository.LoadIndexWithDecoder(ctx, c.repo, id, repository.DecodeIndex)
if errors.Cause(err) == repository.ErrOldIndexFormat {
debug.Log("index %v has old format", id)
hints = append(hints, ErrOldIndexFormat{id})
ch := make(chan FileInfo)
resultCh := make(chan Result)

idx, err = repository.LoadIndexWithDecoder(ctx, c.repo, id, repository.DecodeOldIndex)
// send list of index files through ch, which is closed afterwards
wg.Go(func() error {
defer close(ch)
return c.repo.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
select {
case <-ctx.Done():
return nil
case ch <- FileInfo{id, size}:
}
return nil
})
})

// a worker receives an index ID from ch, loads the index, and sends it to resultCh
worker := func() error {
var buf []byte
for fi := range ch {
debug.Log("worker got file %v", fi.ID.Str())
var err error
var idx *repository.Index
idx, buf, err = repository.LoadIndexWithDecoder(ctx, c.repo, buf[:0], fi.ID, repository.DecodeIndex)
if errors.Cause(err) == repository.ErrOldIndexFormat {
debug.Log("index %v has old format", fi.ID.Str())
hints = append(hints, ErrOldIndexFormat{fi.ID})

idx, buf, err = repository.LoadIndexWithDecoder(ctx, c.repo, buf[:0], fi.ID, repository.DecodeOldIndex)
}

err = errors.Wrapf(err, "error loading index %v", fi.ID.Str())

select {
case resultCh <- Result{idx, fi.ID, err}:
case <-ctx.Done():
}
}

err = errors.Wrapf(err, "error loading index %v", id.Str())

select {
case indexCh <- indexRes{Index: idx, ID: id.String(), err: err}:
case <-ctx.Done():
}

return nil
}

go func() {
defer close(indexCh)
debug.Log("start loading indexes in parallel")
err := repository.FilesInParallel(ctx, c.repo.Backend(), restic.IndexFile, defaultParallelism,
repository.ParallelWorkFuncParseID(worker))
debug.Log("loading indexes finished, error: %v", err)
if err != nil {
panic(err)
}
}()
// final closes resultCh after all workers have terminated
final := func() error {
close(resultCh)
return nil
}

done := make(chan struct{})
defer close(done)
// run workers on ch
wg.Go(func() error {
return repository.RunWorkers(ctx, defaultParallelism, worker, final)
})

// receive decoded indexes
packToIndex := make(map[restic.ID]restic.IDSet)
wg.Go(func() error {
for res := range resultCh {
debug.Log("process index %v, err %v", res.ID, res.Err)

for res := range indexCh {
debug.Log("process index %v, err %v", res.ID, res.err)

if res.err != nil {
errs = append(errs, res.err)
continue
}

idxID, err := restic.ParseID(res.ID)
if err != nil {
errs = append(errs, errors.Errorf("unable to parse as index ID: %v", res.ID))
continue
}

c.indexes[idxID] = res.Index
c.masterIndex.Insert(res.Index)

debug.Log("process blobs")
cnt := 0
for blob := range res.Index.Each(ctx) {
c.packs.Insert(blob.PackID)
c.blobs.Insert(blob.ID)
c.blobRefs.M[blob.ID] = 0
cnt++

if _, ok := packToIndex[blob.PackID]; !ok {
packToIndex[blob.PackID] = restic.NewIDSet()
if res.Err != nil {
errs = append(errs, res.Err)
continue
}
packToIndex[blob.PackID].Insert(idxID)
}

debug.Log("%d blobs processed", cnt)
c.indexes[res.ID] = res.Index
c.masterIndex.Insert(res.Index)

debug.Log("process blobs")
cnt := 0
for blob := range res.Index.Each(ctx) {
c.packs.Insert(blob.PackID)
c.blobs.Insert(blob.ID)
c.blobRefs.M[blob.ID] = 0
cnt++

if _, ok := packToIndex[blob.PackID]; !ok {
packToIndex[blob.PackID] = restic.NewIDSet()
}
packToIndex[blob.PackID].Insert(res.ID)
}

debug.Log("%d blobs processed", cnt)
}
return nil
})

err := wg.Wait()
if err != nil {
errs = append(errs, err)
}

debug.Log("checking for duplicate packs")

@@ -163,7 +191,7 @@ func (c *Checker) LoadIndex(ctx context.Context) (hints []error, errs []error) {
}
}

err := c.repo.SetIndex(c.masterIndex)
err = c.repo.SetIndex(c.masterIndex)
if err != nil {
debug.Log("SetIndex returned error: %v", err)
errs = append(errs, err)

@@ -281,31 +309,52 @@ func loadSnapshotTreeIDs(ctx context.Context, repo restic.Repository) (restic.ID
sync.Mutex
}

snapshotWorker := func(ctx context.Context, strID string) error {
id, err := restic.ParseID(strID)
if err != nil {
return err
}
// track spawned goroutines using wg, create a new context which is
// cancelled as soon as an error occurs.
wg, ctx := errgroup.WithContext(ctx)

debug.Log("load snapshot %v", id)
ch := make(chan restic.ID)

treeID, err := loadTreeFromSnapshot(ctx, repo, id)
if err != nil {
errs.Lock()
errs.errs = append(errs.errs, err)
errs.Unlock()
// send list of snapshot files through ch, which is closed afterwards
wg.Go(func() error {
defer close(ch)
return repo.List(ctx, restic.SnapshotFile, func(id restic.ID, size int64) error {
select {
case <-ctx.Done():
return nil
case ch <- id:
}
return nil
})
})

// a worker receives a snapshot ID from ch, loads the snapshot and the tree,
// and adds the result to errs and trees.
worker := func() error {
for id := range ch {
debug.Log("load snapshot %v", id)

treeID, err := loadTreeFromSnapshot(ctx, repo, id)
if err != nil {
errs.Lock()
errs.errs = append(errs.errs, err)
errs.Unlock()
continue
}

debug.Log("snapshot %v has tree %v", id, treeID)
trees.Lock()
trees.IDs = append(trees.IDs, treeID)
trees.Unlock()
}

debug.Log("snapshot %v has tree %v", id, treeID)
trees.Lock()
trees.IDs = append(trees.IDs, treeID)
trees.Unlock()

return nil
}

err := repository.FilesInParallel(ctx, repo.Backend(), restic.SnapshotFile, defaultParallelism, snapshotWorker)
for i := 0; i < defaultParallelism; i++ {
wg.Go(worker)
}

err := wg.Wait()
if err != nil {
errs.errs = append(errs.errs, err)
}
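Both rewritten loaders follow the same errgroup shape: one goroutine lists IDs into a channel, a fixed number of workers drain it, and the context derived by errgroup.WithContext cancels the producer as soon as any goroutine fails. A self-contained sketch of the pattern, with a placeholder worker body:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    const parallelism = 5

    func processAll(ctx context.Context, ids []string, process func(context.Context, string) error) error {
        wg, ctx := errgroup.WithContext(ctx)
        ch := make(chan string)

        // producer: send IDs, stop early if a worker already failed
        wg.Go(func() error {
            defer close(ch)
            for _, id := range ids {
                select {
                case <-ctx.Done():
                    return nil
                case ch <- id:
                }
            }
            return nil
        })

        // workers: drain the channel until it is closed or the context ends
        for i := 0; i < parallelism; i++ {
            wg.Go(func() error {
                for id := range ch {
                    if err := process(ctx, id); err != nil {
                        return err // cancels ctx, unblocking the producer
                    }
                }
                return nil
            })
        }

        return wg.Wait()
    }

    func main() {
        ids := []string{"a", "b", "c", "d"}
        err := processAll(context.Background(), ids, func(_ context.Context, id string) error {
            fmt.Println("processing", id)
            return nil
        })
        fmt.Println("err:", err)
    }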
@@ -1,6 +1,7 @@
package fs

import (
"fmt"
"io"
"os"
"path"

@@ -19,10 +20,13 @@ type Reader struct {
Name string
io.ReadCloser

// for FileInfo
Mode os.FileMode
ModTime time.Time
Size int64

AllowEmptyFile bool

open sync.Once
}

@@ -40,7 +44,7 @@ func (fs *Reader) Open(name string) (f File, err error) {
switch name {
case fs.Name:
fs.open.Do(func() {
f = newReaderFile(fs.ReadCloser, fs.fi())
f = newReaderFile(fs.ReadCloser, fs.fi(), fs.AllowEmptyFile)
})

if f == nil {

@@ -78,7 +82,7 @@ func (fs *Reader) OpenFile(name string, flag int, perm os.FileMode) (f File, err
}

fs.open.Do(func() {
f = newReaderFile(fs.ReadCloser, fs.fi())
f = newReaderFile(fs.ReadCloser, fs.fi(), fs.AllowEmptyFile)
})

if f == nil {

@@ -158,9 +162,10 @@ func (fs *Reader) Dir(p string) string {
return path.Dir(p)
}

func newReaderFile(rd io.ReadCloser, fi os.FileInfo) readerFile {
return readerFile{
ReadCloser: rd,
func newReaderFile(rd io.ReadCloser, fi os.FileInfo, allowEmptyFile bool) *readerFile {
return &readerFile{
ReadCloser: rd,
AllowEmptyFile: allowEmptyFile,
fakeFile: fakeFile{
FileInfo: fi,
name: fi.Name(),

@@ -170,19 +175,41 @@ func newReaderFile(rd io.ReadCloser, fi os.FileInfo) readerFile {

type readerFile struct {
io.ReadCloser
AllowEmptyFile, bytesRead bool

fakeFile
}

func (r readerFile) Read(p []byte) (int, error) {
return r.ReadCloser.Read(p)
// ErrFileEmpty is returned inside a *os.PathError by Read() for the file
// opened from the fs provided by Reader when no data could be read and
// AllowEmptyFile is not set.
var ErrFileEmpty = errors.New("no data read")

func (r *readerFile) Read(p []byte) (int, error) {
n, err := r.ReadCloser.Read(p)
if n > 0 {
r.bytesRead = true
}

// return an error if we did not read any data
if err == io.EOF && !r.AllowEmptyFile && !r.bytesRead {
fmt.Printf("reader: %d bytes read, err %v, bytesRead %v, allowEmpty %v\n", n, err, r.bytesRead, r.AllowEmptyFile)
return n, &os.PathError{
Path: r.fakeFile.name,
Op: "read",
Err: ErrFileEmpty,
}
}

return n, err
}

func (r readerFile) Close() error {
func (r *readerFile) Close() error {
return r.ReadCloser.Close()
}

// ensure that readerFile implements File
var _ File = readerFile{}
var _ File = &readerFile{}

// fakeFile implements all File methods, but only returns errors for anything
// except Stat() and Name().
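From a caller's perspective the new flag turns an empty input stream into a read error unless explicitly permitted, which protects stdin-based backups from silently storing zero-byte files. A usage sketch against the Reader above; it targets an internal package, so it compiles only inside the restic tree:

    package main

    import (
        "fmt"
        "io/ioutil"
        "os"
        "strings"
        "time"

        "github.com/restic/restic/internal/fs"
    )

    func main() {
        rd := &fs.Reader{
            Name:       "stdin-file",
            ReadCloser: ioutil.NopCloser(strings.NewReader("")), // empty input
            Mode:       0644,
            ModTime:    time.Now(),
            // AllowEmptyFile left false: the first Read reports ErrFileEmpty.
        }

        f, err := rd.Open("stdin-file")
        if err != nil {
            fmt.Println("open:", err)
            return
        }

        _, err = ioutil.ReadAll(f)
        if pe, ok := err.(*os.PathError); ok && pe.Err == fs.ErrFileEmpty {
            fmt.Println("refusing to read an empty file:", pe)
        }
    }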
@@ -5,6 +5,7 @@ import (
"io/ioutil"
"os"
"sort"
"strings"
"testing"
"time"

@@ -317,3 +318,66 @@ func TestFSReader(t *testing.T) {
})
}
}

func TestFSReaderMinFileSize(t *testing.T) {
var tests = []struct {
name string
data string
allowEmpty bool
readMustErr bool
}{
{
name: "regular",
data: "foobar",
},
{
name: "empty",
data: "",
allowEmpty: false,
readMustErr: true,
},
{
name: "empty2",
data: "",
allowEmpty: true,
readMustErr: false,
},
}

for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
fs := &Reader{
Name: "testfile",
ReadCloser: ioutil.NopCloser(strings.NewReader(test.data)),
Mode: 0644,
ModTime: time.Now(),
AllowEmptyFile: test.allowEmpty,
}

f, err := fs.Open("testfile")
if err != nil {
t.Fatal(err)
}

buf, err := ioutil.ReadAll(f)
if test.readMustErr {
if err == nil {
t.Fatal("expected error not found, got nil")
}
} else {
if err != nil {
t.Fatal(err)
}
}

if string(buf) != test.data {
t.Fatalf("wrong data returned, want %q, got %q", test.data, string(buf))
}

err = f.Close()
if err != nil {
t.Fatal(err)
}
})
}
}
@@ -1,141 +0,0 @@
package mock

import (
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/restic"
)

// Repository implements a mock Repository.
type Repository struct {
BackendFn func() restic.Backend

KeyFn func() *crypto.Key

SetIndexFn func(restic.Index) error

IndexFn func() restic.Index
SaveFullIndexFn func() error
SaveIndexFn func() error
LoadIndexFn func() error

ConfigFn func() restic.Config

LookupBlobSizeFn func(restic.ID, restic.BlobType) (uint, error)

ListFn func(restic.FileType, <-chan struct{}) <-chan restic.ID
ListPackFn func(restic.ID) ([]restic.Blob, int64, error)

FlushFn func() error

SaveUnpackedFn func(restic.FileType, []byte) (restic.ID, error)
SaveJSONUnpackedFn func(restic.FileType, interface{}) (restic.ID, error)

LoadJSONUnpackedFn func(restic.FileType, restic.ID, interface{}) error
LoadAndDecryptFn func(restic.FileType, restic.ID) ([]byte, error)

LoadBlobFn func(restic.BlobType, restic.ID, []byte) (int, error)
SaveBlobFn func(restic.BlobType, []byte, restic.ID) (restic.ID, error)

LoadTreeFn func(restic.ID) (*restic.Tree, error)
SaveTreeFn func(t *restic.Tree) (restic.ID, error)
}

// Backend is a stub method.
func (repo Repository) Backend() restic.Backend {
return repo.BackendFn()
}

// Key is a stub method.
func (repo Repository) Key() *crypto.Key {
return repo.KeyFn()
}

// SetIndex is a stub method.
func (repo Repository) SetIndex(idx restic.Index) error {
return repo.SetIndexFn(idx)
}

// Index is a stub method.
func (repo Repository) Index() restic.Index {
return repo.IndexFn()
}

// SaveFullIndex is a stub method.
func (repo Repository) SaveFullIndex() error {
return repo.SaveFullIndexFn()
}

// SaveIndex is a stub method.
func (repo Repository) SaveIndex() error {
return repo.SaveIndexFn()
}

// LoadIndex is a stub method.
func (repo Repository) LoadIndex() error {
return repo.LoadIndexFn()
}

// Config is a stub method.
func (repo Repository) Config() restic.Config {
return repo.ConfigFn()
}

// LookupBlobSize is a stub method.
func (repo Repository) LookupBlobSize(id restic.ID, t restic.BlobType) (uint, error) {
return repo.LookupBlobSizeFn(id, t)
}

// List is a stub method.
func (repo Repository) List(t restic.FileType, done <-chan struct{}) <-chan restic.ID {
return repo.ListFn(t, done)
}

// ListPack is a stub method.
func (repo Repository) ListPack(id restic.ID) ([]restic.Blob, int64, error) {
return repo.ListPackFn(id)
}

// Flush is a stub method.
func (repo Repository) Flush() error {
return repo.FlushFn()
}

// SaveUnpacked is a stub method.
func (repo Repository) SaveUnpacked(t restic.FileType, buf []byte) (restic.ID, error) {
return repo.SaveUnpackedFn(t, buf)
}

// SaveJSONUnpacked is a stub method.
func (repo Repository) SaveJSONUnpacked(t restic.FileType, item interface{}) (restic.ID, error) {
return repo.SaveJSONUnpackedFn(t, item)
}

// LoadJSONUnpacked is a stub method.
func (repo Repository) LoadJSONUnpacked(t restic.FileType, id restic.ID, item interface{}) error {
return repo.LoadJSONUnpackedFn(t, id, item)
}

// LoadAndDecrypt is a stub method.
func (repo Repository) LoadAndDecrypt(t restic.FileType, id restic.ID) ([]byte, error) {
return repo.LoadAndDecryptFn(t, id)
}

// LoadBlob is a stub method.
func (repo Repository) LoadBlob(t restic.BlobType, id restic.ID, buf []byte) (int, error) {
return repo.LoadBlobFn(t, id, buf)
}

// SaveBlob is a stub method.
func (repo Repository) SaveBlob(t restic.BlobType, buf []byte, id restic.ID) (restic.ID, error) {
return repo.SaveBlobFn(t, buf, id)
}

// LoadTree is a stub method.
func (repo Repository) LoadTree(id restic.ID) (*restic.Tree, error) {
return repo.LoadTreeFn(id)
}

// SaveTree is a stub method.
func (repo Repository) SaveTree(t *restic.Tree) (restic.ID, error) {
return repo.SaveTreeFn(t)
}
@@ -549,21 +549,21 @@ func DecodeOldIndex(buf []byte) (idx *Index, err error) {
}

// LoadIndexWithDecoder loads the index and decodes it with fn.
func LoadIndexWithDecoder(ctx context.Context, repo restic.Repository, id restic.ID, fn func([]byte) (*Index, error)) (idx *Index, err error) {
func LoadIndexWithDecoder(ctx context.Context, repo restic.Repository, buf []byte, id restic.ID, fn func([]byte) (*Index, error)) (*Index, []byte, error) {
debug.Log("Loading index %v", id)

buf, err := repo.LoadAndDecrypt(ctx, restic.IndexFile, id)
buf, err := repo.LoadAndDecrypt(ctx, buf[:0], restic.IndexFile, id)
if err != nil {
return nil, err
return nil, buf[:0], err
}

idx, err = fn(buf)
idx, err := fn(buf)
if err != nil {
debug.Log("error while decoding index %v: %v", id, err)
return nil, err
return nil, buf[:0], err
}

idx.id = id

return idx, nil
return idx, buf, nil
}
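The buffer now round-trips through the call even on failure (returned as buf[:0]), so callers such as the checker worker above can keep one allocation alive across all indexes. The idiom it relies on is plain append-into-a-reused-slice, shown here in isolation:

    package main

    import "fmt"

    // fill reuses buf's backing array when capacity allows and lets append
    // allocate a bigger one otherwise — the same contract the buf parameter
    // of LoadIndexWithDecoder follows above.
    func fill(buf []byte, data string) []byte {
        return append(buf[:0], data...)
    }

    func main() {
        var buf []byte
        for _, s := range []string{"short", "a considerably longer payload", "tiny"} {
            buf = fill(buf, s)
            fmt.Printf("%q len=%d cap=%d\n", buf, len(buf), cap(buf))
        }
        // cap never shrinks: the third iteration reuses the second's allocation
    }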
@@ -184,7 +184,7 @@ func SearchKey(ctx context.Context, s *Repository, password string, maxKeys int,
// LoadKey loads a key from the backend.
func LoadKey(ctx context.Context, s *Repository, name string) (k *Key, err error) {
h := restic.Handle{Type: restic.KeyFile, Name: name}
data, err := backend.LoadAll(ctx, s.be, h)
data, err := backend.LoadAll(ctx, nil, s.be, h)
if err != nil {
return nil, err
}
@@ -1,65 +0,0 @@
package repository

import (
"context"

"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/restic"
"golang.org/x/sync/errgroup"
)

// ParallelWorkFunc gets one file ID to work on. If an error is returned,
// processing stops. When the context is cancelled the function should return.
type ParallelWorkFunc func(ctx context.Context, id string) error

// ParallelIDWorkFunc gets one restic.ID to work on. If an error is returned,
// processing stops. When the context is cancelled the function should return.
type ParallelIDWorkFunc func(ctx context.Context, id restic.ID) error

// FilesInParallel runs n workers of f in parallel, on the IDs that
// repo.List(t) yields. If f returns an error, the process is aborted and the
// first error is returned.
func FilesInParallel(ctx context.Context, repo restic.Lister, t restic.FileType, n int, f ParallelWorkFunc) error {
g, ctx := errgroup.WithContext(ctx)

ch := make(chan string, n)
g.Go(func() error {
defer close(ch)
return repo.List(ctx, t, func(fi restic.FileInfo) error {
select {
case <-ctx.Done():
case ch <- fi.Name:
}
return nil
})
})

for i := 0; i < n; i++ {
g.Go(func() error {
for name := range ch {
err := f(ctx, name)
if err != nil {
return err
}
}
return nil
})
}

return g.Wait()
}

// ParallelWorkFuncParseID converts a function that takes a restic.ID to a
// function that takes a string. Filenames that do not parse as a restic.ID
// are ignored.
func ParallelWorkFuncParseID(f ParallelIDWorkFunc) ParallelWorkFunc {
return func(ctx context.Context, s string) error {
id, err := restic.ParseID(s)
if err != nil {
debug.Log("invalid ID %q: %v", id, err)
return nil
}

return f(ctx, id)
}
}
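With FilesInParallel gone, its former call sites build the list/worker pipeline inline (see checker.go above) on top of a RunWorkers helper. restic's actual helper also takes a context; what follows is only a hedged sketch of the general shape, not the real repository.RunWorkers signature:

    package main

    import (
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    // runWorkers starts count goroutines running workerFunc and, once all of
    // them have returned, calls finalFunc (typically to close a result channel).
    func runWorkers(count int, workerFunc func() error, finalFunc func() error) error {
        var wg errgroup.Group
        for i := 0; i < count; i++ {
            wg.Go(workerFunc)
        }
        err := wg.Wait()
        // finalFunc always runs, but the first worker error wins
        if ferr := finalFunc(); err == nil {
            err = ferr
        }
        return err
    }

    func main() {
        results := make(chan int, 10)
        worker := func() error { results <- 1; return nil }
        final := func() error { close(results); return nil }

        if err := runWorkers(5, worker, final); err != nil {
            fmt.Println(err)
        }
        sum := 0
        for r := range results {
            sum += r
        }
        fmt.Println("workers run:", sum) // 5
    }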
@@ -1,129 +0,0 @@
|
||||
package repository_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"math/rand"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/restic/restic/internal/errors"
|
||||
"github.com/restic/restic/internal/restic"
|
||||
|
||||
"github.com/restic/restic/internal/repository"
|
||||
rtest "github.com/restic/restic/internal/test"
|
||||
)
|
||||
|
||||
type testIDs []string
|
||||
|
||||
var lister = testIDs{
|
||||
"40bb581cd36de952985c97a3ff6b21df41ee897d4db2040354caa36a17ff5268",
|
||||
"2e15811a4d14ffac66d36a9ff456019d8de4c10c949d45b643f8477d17e92ff3",
|
||||
"70c11b3ed521ad6b76d905c002ca98b361fca06aca060a063432c7311155a4da",
|
||||
"8056a33e75dccdda701b6c989c7ed0cb71bbb6da13c6427fe5986f0896cc91c0",
|
||||
"79d8776200596aa0237b10d470f7b850b86f8a1a80988ef5c8bee2874ce992e2",
|
||||
"f9f1f29791c6b79b90b35efd083f17a3b163bbbafb1a2fdf43d46d56cffda289",
|
||||
"3834178d05d0f6dd07f872ee0262ff1ace0f0f375768227d3c902b0b66591369",
|
||||
"66d5cc68c9186414806f366ae5493ce7f229212993750a4992be4030f6af28c5",
|
||||
"ebca5af4f397944f68cd215e3dfa2b197a7ba0f7c17d65d9f7390d0a15cde296",
|
||||
"d4511ce6ff732d106275a57e40745c599e987c0da44c42cddbef592aac102437",
|
||||
"f366202f0bfeefaedd7b49e2f21a90d3cbddb97d257a74d788dd34e19a684dae",
|
||||
"a5c17728ab2433cd50636dd5c6c7068c7a44f2999d09c46e8f528466da8a059d",
|
||||
"bae0f9492b9b208233029b87692a1a55cbd7fbe1cf3f6d7bc693ac266a6d6f0e",
|
||||
"9d500187913c7510d71d1902703d312c7aaa56f1e98351385b9535fdabae595e",
|
||||
"ffbddd8a4c1e54d258bb3e16d3929b546b61af63cb560b3e3061a8bef5b24552",
|
||||
"201bb3abf655e7ef71e79ed4fb1079b0502b5acb4d9fad5e72a0de690c50a386",
|
||||
"08eb57bbd559758ea96e99f9b7688c30e7b3bcf0c4562ff4535e2d8edeffaeed",
|
||||
"e50b7223b04985ff38d9e11d1cba333896ef4264f82bd5d0653a028bce70e542",
|
||||
"65a9421cd59cc7b7a71dcd9076136621af607fb4701d2e5c2af23b6396cf2f37",
|
||||
"995a655b3521c19b4d0c266222266d89c8fc62889597d61f45f336091e646d57",
|
||||
"51ec6f0bce77ed97df2dd7ae849338c3a8155a057da927eedd66e3d61be769ad",
|
||||
"7b3923a0c0666431efecdbf6cb171295ec1710b6595eebcba3b576b49d13e214",
|
||||
"2cedcc3d14698bea7e4b0546f7d5d48951dd90add59e6f2d44b693fd8913717d",
|
||||
"fd6770cbd54858fdbd3d7b4239b985e5599180064d93ca873f27e86e8407d011",
|
||||
"9edc51d8e6e04d05c9757848c1bfbfdc8e86b6330982294632488922e59fdb1b",
|
||||
"1a6c4fbb24ad724c968b2020417c3d057e6c89e49bdfb11d91006def65eab6a0",
|
||||
"cb3b29808cd0adfa2dca1f3a04f98114fbccf4eb487cdd4022f49bd70eeb049b",
|
||||
"f55edcb40c619e29a20e432f8aaddc83a649be2c2d1941ccdc474cd2af03d490",
|
||||
"e8ccc1763a92de23566b95c3ad1414a098016ece69a885fc8a72782a7517d17c",
|
||||
"0fe2e3db8c5a12ad7101a63a0fffee901be54319cfe146bead7aec851722f82d",
|
||||
"36be45a6ae7c95ad97cee1b33023be324bce7a7b4b7036e24125679dd9ff5b44",
|
||||
"1685ed1a57c37859fbef1f7efb7509f20b84ec17a765605de43104d2fa37884b",
|
||||
"9d83629a6a004c505b100a0b5d0b246833b63aa067aa9b59e3abd6b74bc4d3a8",
|
||||
"be49a66b60175c5e2ee273b42165f86ef11bb6518c1c79950bcd3f4c196c98bd",
|
||||
"0fd89885d821761b4a890782908e75793028747d15ace3c6cbf0ad56582b4fa5",
|
||||
"94a767519a4e352a88796604943841fea21429f3358b4d5d55596dbda7d15dce",
|
||||
"8dd07994afe6e572ddc9698fb0d13a0d4c26a38b7992818a71a99d1e0ac2b034",
|
||||
"f7380a6f795ed31fbeb2945c72c5fd1d45044e5ab152311e75e007fa530f5847",
|
||||
"5ca1ce01458e484393d7e9c8af42b0ff37a73a2fee0f18e14cff0fb180e33014",
|
||||
"8f44178be3fe0a2bd41f922576fb7a9b19d589754504be746f56c759df328fda",
|
||||
"12d33847c2be711c989f37360dd7aa8537fd14972262a4530634a08fdf32a767",
|
||||
"31e077f5080f78846a00093caff2b6b839519cc47516142eeba9c41d4072a605",
|
||||
"14f01db8a0054e70222b76d2555d70114b4bf8a0f02084324af2df226f14a795",
|
||||
"7f5dbbaf31b4551828e8e76cef408375db9fbcdcdb6b5949f2d1b0c4b8632132",
|
||||
"42a5d9b9bb7e4a16f23ba916bcf87f38c1aa1f2de2ab79736f725850a8ff6a1b",
|
||||
"e06f8f901ea708beba8712a11b6e2d0be7c4b018d0254204ef269bcdf5e8c6cc",
|
||||
"d9ba75785bf45b0c4fd3b2365c968099242483f2f0d0c7c20306dac11fae96e9",
|
||||
"428debbb280873907cef2ec099efe1566e42a59775d6ec74ded0c4048d5a6515",
|
||||
"3b51049d4dae701098e55a69536fa31ad2be1adc17b631a695a40e8a294fe9c0",
|
||||
"168f88aa4b105e9811f5f79439cc1a689be4eec77f3361d42f22fe8f7ddc74a9",
|
||||
"0baa0ab2249b33d64449a899cb7bd8eae5231f0d4ff70f09830dc1faa2e4abee",
|
||||
"0c3896d346b580306a49de29f3a78913a41e14b8461b124628c33a64636241f2",
|
||||
"b18313f1651c15e100e7179aa3eb8ffa62c3581159eaf7f83156468d19781e42",
|
||||
"996361f7d988e48267ccc7e930fed4637be35fe7562b8601dceb7a32313a14c8",
|
||||
"dfb4e6268437d53048d22b811048cd045df15693fc6789affd002a0fc80a6e60",
|
||||
"34dd044c228727f2226a0c9c06a3e5ceb5e30e31cb7854f8fa1cde846b395a58",
|
||||
}
|
||||
|
||||
func (tests testIDs) List(ctx context.Context, t restic.FileType, fn func(restic.FileInfo) error) error {
    for i := 0; i < 500; i++ {
        for _, id := range tests {
            if ctx.Err() != nil {
                return ctx.Err()
            }

            fi := restic.FileInfo{
                Name: id,
            }

            err := fn(fi)
            if err != nil {
                return err
            }
        }
    }

    return nil
}

func TestFilesInParallel(t *testing.T) {
    f := func(ctx context.Context, id string) error {
        time.Sleep(1 * time.Millisecond)
        return nil
    }

    for n := 1; n < 5; n++ {
        err := repository.FilesInParallel(context.TODO(), lister, restic.DataFile, n*100, f)
        rtest.OK(t, err)
    }
}

var errTest = errors.New("test error")

func TestFilesInParallelWithError(t *testing.T) {
    f := func(ctx context.Context, id string) error {
        time.Sleep(1 * time.Millisecond)

        if rand.Float32() < 0.01 {
            return errTest
        }

        return nil
    }

    for n := 1; n < 5; n++ {
        err := repository.FilesInParallel(context.TODO(), lister, restic.DataFile, n*100, f)
        if err != errTest {
            t.Fatalf("wrong error returned, want %q, got %v", errTest, err)
        }
    }
}
@@ -9,7 +9,6 @@ import (
    "io"
    "os"

    "github.com/restic/restic/internal/backend"
    "github.com/restic/restic/internal/cache"
    "github.com/restic/restic/internal/crypto"
    "github.com/restic/restic/internal/debug"
@@ -18,6 +17,7 @@ import (
    "github.com/restic/restic/internal/hashing"
    "github.com/restic/restic/internal/pack"
    "github.com/restic/restic/internal/restic"
    "golang.org/x/sync/errgroup"
)

// Repository is used to access a repository in a backend.
@@ -66,15 +66,29 @@ func (r *Repository) PrefixLength(t restic.FileType) (int, error) {
    return restic.PrefixLength(r.be, t)
}

// LoadAndDecrypt loads and decrypts data identified by t and id from the
// backend.
func (r *Repository) LoadAndDecrypt(ctx context.Context, t restic.FileType, id restic.ID) (buf []byte, err error) {
// LoadAndDecrypt loads and decrypts the file with the given type and ID, using
// the supplied buffer (which must be empty). If the buffer is nil, a new
// buffer will be allocated and returned.
func (r *Repository) LoadAndDecrypt(ctx context.Context, buf []byte, t restic.FileType, id restic.ID) ([]byte, error) {
    if len(buf) != 0 {
        panic("buf is not empty")
    }

    debug.Log("load %v with id %v", t, id)

    h := restic.Handle{Type: t, Name: id.String()}
    buf, err = backend.LoadAll(ctx, r.be, h)
    err := r.be.Load(ctx, h, 0, 0, func(rd io.Reader) error {
        // make sure this call is idempotent, in case an error occurs
        wr := bytes.NewBuffer(buf[:0])
        _, cerr := io.Copy(wr, rd)
        if cerr != nil {
            return cerr
        }
        buf = wr.Bytes()
        return nil
    })

    if err != nil {
        debug.Log("error loading %v: %v", h, err)
        return nil, err
    }
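The new signature makes the caller responsible for buffer reuse. A minimal sketch of the calling convention, assuming access to a *Repository; the loop, function name, and ID slice below are illustrative, not part of the change:

package example // illustration only, not part of the diff

import (
    "context"

    "github.com/restic/restic/internal/repository"
    "github.com/restic/restic/internal/restic"
)

// loadAll shows the buffer-reuse convention of the new LoadAndDecrypt:
// pass buf[:0] to recycle the previous allocation, or nil to let the
// function allocate a fresh buffer.
func loadAll(ctx context.Context, repo *repository.Repository, ids restic.IDs) error {
    var buf []byte
    for _, id := range ids {
        var err error
        buf, err = repo.LoadAndDecrypt(ctx, buf[:0], restic.IndexFile, id)
        if err != nil {
            return err
        }
        // buf holds the decrypted file until the next iteration reuses it
    }
    return nil
}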
@@ -187,7 +201,7 @@ func (r *Repository) loadBlob(ctx context.Context, id restic.ID, t restic.BlobTy
// LoadJSONUnpacked decrypts the data and afterwards calls json.Unmarshal on
// the item.
func (r *Repository) LoadJSONUnpacked(ctx context.Context, t restic.FileType, id restic.ID, item interface{}) (err error) {
    buf, err := r.LoadAndDecrypt(ctx, t, id)
    buf, err := r.LoadAndDecrypt(ctx, nil, t, id)
    if err != nil {
        return err
    }
@@ -391,45 +405,86 @@ const loadIndexParallelism = 4

func (r *Repository) LoadIndex(ctx context.Context) error {
    debug.Log("Loading index")

    errCh := make(chan error, 1)
    indexes := make(chan *Index)
    // track spawned goroutines using wg, create a new context which is
    // cancelled as soon as an error occurs.
    wg, ctx := errgroup.WithContext(ctx)

    worker := func(ctx context.Context, id restic.ID) error {
        idx, err := LoadIndex(ctx, r, id)
        if err != nil {
            fmt.Fprintf(os.Stderr, "%v, ignoring\n", err)
    type FileInfo struct {
        restic.ID
        Size int64
    }
    ch := make(chan FileInfo)
    indexCh := make(chan *Index)

    // send list of index files through ch, which is closed afterwards
    wg.Go(func() error {
        defer close(ch)
        return r.List(ctx, restic.IndexFile, func(id restic.ID, size int64) error {
            select {
            case <-ctx.Done():
                return nil
            case ch <- FileInfo{id, size}:
            }
            return nil
        }
        })
    })

    select {
    case indexes <- idx:
    case <-ctx.Done():
    // a worker receives an index ID from ch, loads the index, and sends it to indexCh
    worker := func() error {
        var buf []byte
        for fi := range ch {
            var err error
            var idx *Index
            idx, buf, err = LoadIndexWithDecoder(ctx, r, buf[:0], fi.ID, DecodeIndex)
            if err != nil && errors.Cause(err) == ErrOldIndexFormat {
                idx, buf, err = LoadIndexWithDecoder(ctx, r, buf[:0], fi.ID, DecodeOldIndex)
            }

            select {
            case indexCh <- idx:
            case <-ctx.Done():
            }
        }

        return nil
    }

    go func() {
        defer close(indexes)
        errCh <- FilesInParallel(ctx, r.be, restic.IndexFile, loadIndexParallelism,
            ParallelWorkFuncParseID(worker))
    }()

    validIndex := restic.NewIDSet()
    for idx := range indexes {
        id, err := idx.ID()
        if err == nil {
            validIndex.Insert(id)
        }
        r.idx.Insert(idx)
    // final closes indexCh after all workers have terminated
    final := func() error {
        close(indexCh)
        return nil
    }

    err := r.PrepareCache(validIndex)
    // run workers on ch
    wg.Go(func() error {
        return RunWorkers(ctx, loadIndexParallelism, worker, final)
    })

    // receive decoded indexes
    validIndex := restic.NewIDSet()
    wg.Go(func() error {
        for idx := range indexCh {
            id, err := idx.ID()
            if err == nil {
                validIndex.Insert(id)
            }
            r.idx.Insert(idx)
        }
        return nil
    })

    err := wg.Wait()
    if err != nil {
        return err
    }

    return <-errCh
    // remove index files from the cache which have been removed in the repo
    err = r.PrepareCache(validIndex)
    if err != nil {
        return err
    }

    return nil
}
// PrepareCache initializes the local cache. indexIDs is the list of IDs of
@@ -495,14 +550,15 @@ func (r *Repository) PrepareCache(indexIDs restic.IDSet) error {

// LoadIndex loads the index id from backend and returns it.
func LoadIndex(ctx context.Context, repo restic.Repository, id restic.ID) (*Index, error) {
    idx, err := LoadIndexWithDecoder(ctx, repo, id, DecodeIndex)
    idx, _, err := LoadIndexWithDecoder(ctx, repo, nil, id, DecodeIndex)
    if err == nil {
        return idx, nil
    }

    if errors.Cause(err) == ErrOldIndexFormat {
        fmt.Fprintf(os.Stderr, "index %v has old format\n", id.Str())
        return LoadIndexWithDecoder(ctx, repo, id, DecodeOldIndex)
        idx, _, err := LoadIndexWithDecoder(ctx, repo, nil, id, DecodeOldIndex)
        return idx, err
    }

    return nil, err
@@ -244,7 +244,7 @@ func BenchmarkLoadAndDecrypt(b *testing.B) {
    b.SetBytes(int64(length))

    for i := 0; i < b.N; i++ {
        data, err := repo.LoadAndDecrypt(context.TODO(), restic.DataFile, storageID)
        data, err := repo.LoadAndDecrypt(context.TODO(), nil, restic.DataFile, storageID)
        rtest.OK(b, err)
        if len(data) != length {
            b.Errorf("wanted %d bytes, got %d", length, len(data))
35 internal/repository/worker_group.go Normal file
@@ -0,0 +1,35 @@
package repository

import (
    "context"

    "golang.org/x/sync/errgroup"
)

// RunWorkers runs count instances of workerFunc using an errgroup.Group.
// After all workers have terminated, finalFunc is run. If an error occurs in
// one of the workers, it is returned. FinalFunc is always run, regardless of
// any other previous errors.
func RunWorkers(ctx context.Context, count int, workerFunc, finalFunc func() error) error {
    wg, ctx := errgroup.WithContext(ctx)

    // run workers
    for i := 0; i < count; i++ {
        wg.Go(workerFunc)
    }

    // wait for termination
    err := wg.Wait()

    // make sure finalFunc is run
    finalErr := finalFunc()

    // if the workers returned an error, return it to the caller (disregarding
    // any error from finalFunc)
    if err != nil {
        return err
    }

    // if not, return the value finalFunc returned
    return finalErr
}
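A minimal, hypothetical usage sketch of RunWorkers, mirroring the producer / worker pool / accumulator shape that the rewritten LoadIndex is built on; the squaring worker and all names here are invented for illustration:

package example // illustration only, not part of the diff

import (
    "context"

    "github.com/restic/restic/internal/repository"
)

// sumSquares sketches the pipeline pattern: a producer feeds a channel,
// RunWorkers drains it with a fixed pool, and the final func closes the
// output channel only after every worker has returned, which lets a
// single accumulator goroutine own the result.
func sumSquares(ctx context.Context, inputs []int) (int, error) {
    in := make(chan int)
    out := make(chan int)

    // producer: sends all work, then closes in
    go func() {
        defer close(in)
        for _, v := range inputs {
            in <- v
        }
    }()

    // accumulator: the only goroutine that touches sum while out is open
    done := make(chan struct{})
    sum := 0
    go func() {
        defer close(done)
        for v := range out {
            sum += v
        }
    }()

    worker := func() error {
        for v := range in {
            out <- v * v
        }
        return nil
    }
    final := func() error {
        close(out)
        return nil
    }

    err := repository.RunWorkers(ctx, 4, worker, final)
    <-done
    return sum, err
}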
@@ -24,7 +24,7 @@ func Find(be Lister, t FileType, prefix string) (string, error) {
    defer cancel()

    err := be.List(ctx, t, func(fi FileInfo) error {
        if prefix == fi.Name[:len(prefix)] {
        if len(fi.Name) >= len(prefix) && prefix == fi.Name[:len(prefix)] {
            if match == "" {
                match = fi.Name
            } else {
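The added length guard matters because slicing a Go string past its length panics at runtime; the old check did exactly that whenever a listed name was shorter than the search prefix. A self-contained reproduction of the failure mode the fix prevents (names invented):

package main

import "fmt"

func main() {
    name, prefix := "abc", "abcdef"
    // Without the len check, name[:len(prefix)] would panic:
    // "slice bounds out of range [:6] with length 3".
    if len(name) >= len(prefix) && prefix == name[:len(prefix)] {
        fmt.Println("match")
    } else {
        fmt.Println("no match, and no out-of-range panic")
    }
}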
@@ -24,6 +24,57 @@ var samples = IDs{
    TestParseID("fa31d65b87affcd167b119e9d3d2a27b8236ca4836cb077ed3e96fcbe209b792"),
}

func TestFind(t *testing.T) {
    list := samples

    m := mockBackend{}
    m.list = func(ctx context.Context, t FileType, fn func(FileInfo) error) error {
        for _, id := range list {
            err := fn(FileInfo{Name: id.String()})
            if err != nil {
                return err
            }
        }
        return nil
    }

    f, err := Find(m, SnapshotFile, "20bdc1402a6fc9b633aa")
    if err != nil {
        t.Error(err)
    }
    expected_match := "20bdc1402a6fc9b633aaffffffffffffffffffffffffffffffffffffffffffff"
    if f != expected_match {
        t.Errorf("Wrong match returned want %s, got %s", expected_match, f)
    }

    f, err = Find(m, SnapshotFile, "NotAPrefix")
    if err != ErrNoIDPrefixFound {
        t.Error("Expected no snapshots to be found.")
    }
    if f != "" {
        t.Errorf("Find should not return a match on error.")
    }

    // Try to match with a prefix longer than any ID.
    extra_length_id := samples[0].String() + "f"
    f, err = Find(m, SnapshotFile, extra_length_id)
    if err != ErrNoIDPrefixFound {
        t.Error("Expected no snapshots to be matched.")
    }
    if f != "" {
        t.Errorf("Find should not return a match on error.")
    }

    // Use a prefix that will match the prefix of multiple Ids in `samples`.
    f, err = Find(m, SnapshotFile, "20bdc140")
    if err != ErrMultipleIDMatches {
        t.Error("Expected multiple snapshots to be matched.")
    }
    if f != "" {
        t.Errorf("Find should not return a match on error.")
    }
}

func TestPrefixLength(t *testing.T) {
    list := samples
@@ -5,6 +5,7 @@ import (
    "crypto/sha256"
    "encoding/hex"
    "encoding/json"
    "fmt"
    "io"

    "github.com/restic/restic/internal/errors"
@@ -101,13 +102,33 @@ func (id ID) MarshalJSON() ([]byte, error) {

// UnmarshalJSON parses the JSON-encoded data and stores the result in id.
func (id *ID) UnmarshalJSON(b []byte) error {
    var s string
    err := json.Unmarshal(b, &s)
    if err != nil {
        return errors.Wrap(err, "Unmarshal")
    // check string length
    if len(b) < 2 {
        return fmt.Errorf("invalid ID: %q", b)
    }

    _, err = hex.Decode(id[:], []byte(s))
    if len(b)%2 != 0 {
        return fmt.Errorf("invalid ID length: %q", b)
    }

    // check string delimiters
    if b[0] != '"' && b[0] != '\'' {
        return fmt.Errorf("invalid start of string: %q", b[0])
    }

    last := len(b) - 1
    if b[0] != b[last] {
        return fmt.Errorf("starting string delimiter (%q) does not match end (%q)", b[0], b[last])
    }

    // strip JSON string delimiters
    b = b[1:last]

    if len(b) != 2*len(id) {
        return fmt.Errorf("invalid length for ID")
    }

    _, err := hex.Decode(id[:], b)
    if err != nil {
        return errors.Wrap(err, "hex.Decode")
    }
@@ -51,10 +51,47 @@ func TestID(t *testing.T) {
    var id3 ID
    err = id3.UnmarshalJSON(buf)
    if err != nil {
        t.Fatal(err)
        t.Fatalf("error for %q: %v", buf, err)
    }
    if !reflect.DeepEqual(id, id3) {
        t.Error("ids are not equal")
    }
}
}

func TestIDUnmarshal(t *testing.T) {
    var tests = []struct {
        s     string
        valid bool
    }{
        {`"`, false},
        {`""`, false},
        {`'`, false},
        {`"`, false},
        {`"c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4"`, false},
        {`"c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f"`, false},
        {`"c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2"`, true},
    }

    wantID, err := ParseID("c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2")
    if err != nil {
        t.Fatal(err)
    }

    for _, test := range tests {
        t.Run("", func(t *testing.T) {
            id := &ID{}
            err := id.UnmarshalJSON([]byte(test.s))
            if test.valid && err != nil {
                t.Fatal(err)
            }
            if !test.valid && err == nil {
                t.Fatalf("want error for invalid value, got nil")
            }

            if test.valid && !id.Equal(wantID) {
                t.Fatalf("wrong ID returned, want %s, got %s", wantID, id)
            }
        })
    }
}
@@ -321,11 +321,11 @@ func (node Node) createSymlinkAt(path string) error {
}

func (node *Node) createDevAt(path string) error {
    return mknod(path, syscall.S_IFBLK|0600, int(node.Device))
    return mknod(path, syscall.S_IFBLK|0600, node.device())
}

func (node *Node) createCharDevAt(path string) error {
    return mknod(path, syscall.S_IFCHR|0600, int(node.Device))
    return mknod(path, syscall.S_IFCHR|0600, node.device())
}

func (node *Node) createFifoAt(path string) error {

@@ -6,6 +6,10 @@ func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespe
    return nil
}

func (node Node) device() int {
    return int(node.Device)
}

func (s statUnix) atim() syscall.Timespec { return s.Atimespec }
func (s statUnix) mtim() syscall.Timespec { return s.Mtimespec }
func (s statUnix) ctim() syscall.Timespec { return s.Ctimespec }
@@ -1,3 +1,5 @@
// +build freebsd,go1.12

package restic

import "syscall"
@@ -6,6 +8,10 @@ func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespe
    return nil
}

func (node Node) device() uint64 {
    return node.Device
}

func (s statUnix) atim() syscall.Timespec { return s.Atimespec }
func (s statUnix) mtim() syscall.Timespec { return s.Mtimespec }
func (s statUnix) ctim() syscall.Timespec { return s.Ctimespec }

17 internal/restic/node_freebsd_go111.go Normal file
@@ -0,0 +1,17 @@
// +build freebsd,!go1.12

package restic

import "syscall"

func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespec) error {
    return nil
}

func (node Node) device() int {
    return int(node.Device)
}

func (s statUnix) atim() syscall.Timespec { return s.Atimespec }
func (s statUnix) mtim() syscall.Timespec { return s.Mtimespec }
func (s statUnix) ctim() syscall.Timespec { return s.Ctimespec }
@@ -32,6 +32,10 @@ func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespe
    return nil
}

func (node Node) device() int {
    return int(node.Device)
}

func (s statUnix) atim() syscall.Timespec { return s.Atim }
func (s statUnix) mtim() syscall.Timespec { return s.Mtim }
func (s statUnix) ctim() syscall.Timespec { return s.Ctim }

@@ -6,6 +6,10 @@ func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespe
    return nil
}

func (node Node) device() int {
    return int(node.Device)
}

func (s statUnix) atim() syscall.Timespec { return s.Atimespec }
func (s statUnix) mtim() syscall.Timespec { return s.Mtimespec }
func (s statUnix) ctim() syscall.Timespec { return s.Ctimespec }

@@ -6,6 +6,10 @@ func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespe
    return nil
}

func (node Node) device() int {
    return int(node.Device)
}

func (s statUnix) atim() syscall.Timespec { return s.Atim }
func (s statUnix) mtim() syscall.Timespec { return s.Mtim }
func (s statUnix) ctim() syscall.Timespec { return s.Ctim }

@@ -6,6 +6,10 @@ func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespe
    return nil
}

func (node Node) device() int {
    return int(node.Device)
}

func (s statUnix) atim() syscall.Timespec { return s.Atim }
func (s statUnix) mtim() syscall.Timespec { return s.Mtim }
func (s statUnix) ctim() syscall.Timespec { return s.Ctim }

@@ -22,6 +22,10 @@ func (node Node) restoreSymlinkTimestamps(path string, utimes [2]syscall.Timespe
    return nil
}

func (node Node) device() int {
    return int(node.Device)
}

// Getxattr retrieves extended attribute data associated with path.
func Getxattr(path, name string) ([]byte, error) {
    return nil, nil
@@ -39,8 +39,11 @@ type Repository interface {
    SaveUnpacked(context.Context, FileType, []byte) (ID, error)
    SaveJSONUnpacked(context.Context, FileType, interface{}) (ID, error)

    LoadJSONUnpacked(context.Context, FileType, ID, interface{}) error
    LoadAndDecrypt(context.Context, FileType, ID) ([]byte, error)
    LoadJSONUnpacked(ctx context.Context, t FileType, id ID, dest interface{}) error
    // LoadAndDecrypt loads and decrypts the file with the given type and ID,
    // using the supplied buffer (which must be empty). If the buffer is nil, a
    // new buffer will be allocated and returned.
    LoadAndDecrypt(ctx context.Context, buf []byte, t FileType, id ID) (data []byte, err error)

    LoadBlob(context.Context, BlobType, ID, []byte) (int, error)
    SaveBlob(context.Context, BlobType, []byte, ID) (ID, error)
76 internal/restic/snapshot_group.go Normal file
@@ -0,0 +1,76 @@
package restic

import (
    "encoding/json"
    "sort"
    "strings"

    "github.com/restic/restic/internal/errors"
)

// SnapshotGroupKey is the structure for identifying groups in a grouped
// snapshot list. This is used by GroupSnapshots()
type SnapshotGroupKey struct {
    Hostname string   `json:"hostname"`
    Paths    []string `json:"paths"`
    Tags     []string `json:"tags"`
}

// GroupSnapshots takes a list of snapshots and a grouping criteria and creates
// a grouped list of snapshots.
func GroupSnapshots(snapshots Snapshots, options string) (map[string]Snapshots, bool, error) {
    // group by hostname and dirs
    snapshotGroups := make(map[string]Snapshots)

    var GroupByTag bool
    var GroupByHost bool
    var GroupByPath bool
    var GroupOptionList []string

    GroupOptionList = strings.Split(options, ",")

    for _, option := range GroupOptionList {
        switch option {
        case "host":
            GroupByHost = true
        case "paths":
            GroupByPath = true
        case "tags":
            GroupByTag = true
        case "":
        default:
            return nil, false, errors.Fatal("unknown grouping option: '" + option + "'")
        }
    }

    for _, sn := range snapshots {
        // Determining grouping-keys
        var tags []string
        var hostname string
        var paths []string

        if GroupByTag {
            tags = sn.Tags
            sort.StringSlice(tags).Sort()
        }
        if GroupByHost {
            hostname = sn.Hostname
        }
        if GroupByPath {
            paths = sn.Paths
        }

        sort.StringSlice(sn.Paths).Sort()
        var k []byte
        var err error

        k, err = json.Marshal(SnapshotGroupKey{Tags: tags, Hostname: hostname, Paths: paths})

        if err != nil {
            return nil, false, err
        }
        snapshotGroups[string(k)] = append(snapshotGroups[string(k)], sn)
    }

    return snapshotGroups, GroupByTag || GroupByHost || GroupByPath, nil
}
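A hedged usage sketch for GroupSnapshots; the driver function is invented, and only the exported API shown in the new file above is assumed. Because each map key is the JSON-encoded SnapshotGroupKey, callers can decode it back for display:

package example // illustration only, not part of the diff

import (
    "encoding/json"
    "fmt"

    "github.com/restic/restic/internal/restic"
)

// groupByHostAndTags shows how a caller consumes GroupSnapshots output.
func groupByHostAndTags(snapshots restic.Snapshots) error {
    groups, grouped, err := restic.GroupSnapshots(snapshots, "host,tags")
    if err != nil {
        return err
    }
    if !grouped {
        fmt.Println("no grouping criteria given")
    }
    for k, sns := range groups {
        // the map key is the marshalled SnapshotGroupKey
        var key restic.SnapshotGroupKey
        if err := json.Unmarshal([]byte(k), &key); err != nil {
            return err
        }
        fmt.Printf("host %q, tags %v: %d snapshots\n", key.Hostname, key.Tags, len(sns))
    }
    return nil
}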
@@ -23,8 +23,8 @@ import (
const (
    workerCount = 8

    // max number of open output file handles
    filesWriterCount = 32
    // max number of cached open output file handles
    filesWriterCacheCap = 32

    // estimated average pack size used to calculate pack cache capacity
    averagePackSize = 5 * 1024 * 1024
@@ -73,7 +73,7 @@ func newFileRestorer(dst string, packLoader func(ctx context.Context, h restic.H
        packLoader:  packLoader,
        key:         key,
        idx:         idx,
        filesWriter: newFilesWriter(filesWriterCount),
        filesWriter: newFilesWriter(filesWriterCacheCap),
        packCache:   newPackCache(packCacheCapacity),
        dst:         dst,
    }

@@ -178,7 +178,8 @@ func restoreAndVerify(t *testing.T, tempdir string, content []TestFile) {
            continue
        }

        rtest.Equals(t, false, r.filesWriter.writers.Contains(target))
        _, contains := r.filesWriter.cache[target]
        rtest.Equals(t, false, contains)

        content := repo.fileContent(file)
        if !bytes.Equal(data, []byte(content)) {
@@ -1,36 +1,55 @@
package restorer

import (
    "io"
    "os"
    "sync"

    "github.com/hashicorp/golang-lru/simplelru"
    "github.com/restic/restic/internal/debug"
    "github.com/restic/restic/internal/errors"
)

// Writes blobs to output files. Each file is written sequentially,
// start to finish, but multiple files can be written to concurrently.
// Implementation allows virtually unlimited number of logically open
// files, but number of physically open files will never exceed number
// of concurrent writeToFile invocations plus cacheCap.
type filesWriter struct {
    lock       sync.Mutex          // guards concurrent access
    lock       sync.Mutex          // guards concurrent access to open files cache
    inprogress map[string]struct{} // (logically) opened file writers
    writers    simplelru.LRUCache  // key: string, value: *os.File
    cache      map[string]*os.File // cache of open files
    cacheCap   int                 // max number of cached open files
}

func newFilesWriter(count int) *filesWriter {
    writers, _ := simplelru.NewLRU(count, func(key interface{}, value interface{}) {
        value.(*os.File).Close()
        debug.Log("Closed and purged cached writer for %v", key)
    })
    return &filesWriter{inprogress: make(map[string]struct{}), writers: writers}
func newFilesWriter(cacheCap int) *filesWriter {
    return &filesWriter{
        inprogress: make(map[string]struct{}),
        cache:      make(map[string]*os.File),
        cacheCap:   cacheCap,
    }
}

func (w *filesWriter) writeToFile(path string, buf []byte) error {
    acquireWriter := func() (io.Writer, error) {
func (w *filesWriter) writeToFile(path string, blob []byte) error {
    // First writeToFile invocation for any given path will:
    // - create and open the file
    // - write the blob to the file
    // - cache the open file if there is space, close the file otherwise
    // Subsequent invocations will:
    // - remove the open file from the cache _or_ open the file for append
    // - write the blob to the file
    // - cache the open file if there is space, close the file otherwise
    // The idea is to cap maximum number of open files with minimal
    // coordination among concurrent writeToFile invocations (note that
    // writeToFile never touches somebody else's open file).

    // TODO measure if caching is useful (likely depends on operating system
    // and hardware configuration)
    acquireWriter := func() (*os.File, error) {
        w.lock.Lock()
        defer w.lock.Unlock()
        if wr, ok := w.writers.Get(path); ok {
        if wr, ok := w.cache[path]; ok {
            debug.Log("Used cached writer for %s", path)
            return wr.(*os.File), nil
            delete(w.cache, path)
            return wr, nil
        }
        var flags int
        if _, append := w.inprogress[path]; append {
@@ -43,21 +62,30 @@ func (w *filesWriter) writeToFile(path string, buf []byte) error {
        if err != nil {
            return nil, err
        }
        w.writers.Add(path, wr)
        debug.Log("Opened and cached writer for %s", path)
        debug.Log("Opened writer for %s", path)
        return wr, nil
    }
    cacheOrCloseWriter := func(wr *os.File) {
        w.lock.Lock()
        defer w.lock.Unlock()
        if len(w.cache) < w.cacheCap {
            w.cache[path] = wr
        } else {
            wr.Close()
        }
    }

    wr, err := acquireWriter()
    if err != nil {
        return err
    }
    n, err := wr.Write(buf)
    n, err := wr.Write(blob)
    cacheOrCloseWriter(wr)
    if err != nil {
        return err
    }
    if n != len(buf) {
        return errors.Errorf("error writing file %v: wrong length written, want %d, got %d", path, len(buf), n)
    if n != len(blob) {
        return errors.Errorf("error writing file %v: wrong length written, want %d, got %d", path, len(blob), n)
    }
    return nil
}
@@ -65,6 +93,9 @@ func (w *filesWriter) writeToFile(path string, buf []byte) error {
func (w *filesWriter) close(path string) {
    w.lock.Lock()
    defer w.lock.Unlock()
    w.writers.Remove(path)
    if wr, ok := w.cache[path]; ok {
        wr.Close()
        delete(w.cache, path)
    }
    delete(w.inprogress, path)
}
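For illustration, the concurrency contract described in the comments above, shown as a small hypothetical driver. The identifiers are the unexported ones from package restorer, so such a sketch would have to live in that package; blobs for one path are written in call order, while different paths may be written concurrently:

package restorer // sketch only, assumes the filesWriter above

// restoreTwoFiles sketches the filesWriter contract: writeToFile appends
// blobs to a path in call order, close releases any cached handle once a
// file is complete.
func restoreTwoFiles() error {
    w := newFilesWriter(32) // cache at most 32 open handles

    for _, step := range []struct {
        path string
        blob []byte
    }{
        {"/tmp/a", []byte("first blob of a")},
        {"/tmp/b", []byte("first blob of b")},
        {"/tmp/a", []byte("second blob of a")}, // taken from cache or reopened for append
    } {
        if err := w.writeToFile(step.path, step.blob); err != nil {
            return err
        }
    }

    w.close("/tmp/a")
    w.close("/tmp/b")
    return nil
}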
@@ -17,21 +17,21 @@ func TestFilesWriterBasic(t *testing.T) {
    f2 := dir + "/f2"

    rtest.OK(t, w.writeToFile(f1, []byte{1}))
    rtest.Equals(t, 1, w.writers.Len())
    rtest.Equals(t, 1, len(w.cache))
    rtest.Equals(t, 1, len(w.inprogress))

    rtest.OK(t, w.writeToFile(f2, []byte{2}))
    rtest.Equals(t, 1, w.writers.Len())
    rtest.Equals(t, 1, len(w.cache))
    rtest.Equals(t, 2, len(w.inprogress))

    rtest.OK(t, w.writeToFile(f1, []byte{1}))
    w.close(f1)
    rtest.Equals(t, 0, w.writers.Len())
    rtest.Equals(t, 0, len(w.cache))
    rtest.Equals(t, 1, len(w.inprogress))

    rtest.OK(t, w.writeToFile(f2, []byte{2}))
    w.close(f2)
    rtest.Equals(t, 0, w.writers.Len())
    rtest.Equals(t, 0, len(w.cache))
    rtest.Equals(t, 0, len(w.inprogress))

    buf, err := ioutil.ReadFile(f1)
@@ -112,9 +112,6 @@ func GitHubLatestRelease(ctx context.Context, owner, repo string) (Release, erro
}

func getGithubData(ctx context.Context, url string) ([]byte, error) {
    ctx, cancel := context.WithTimeout(ctx, githubAPITimeout)
    defer cancel()

    req, err := http.NewRequest(http.MethodGet, url, nil)
    if err != nil {
        return nil, err
@@ -49,6 +49,7 @@ type Backup struct {
            Changed   uint
            Unchanged uint
        }
        ProcessedBytes uint64
        archiver.ItemStats
    }
}
@@ -254,11 +255,17 @@ func formatBytes(c uint64) string {
    }
}

// CompleteItemFn is the status callback function for the archiver when a
// CompleteItem is the status callback function for the archiver when a
// file/dir has been saved successfully.
func (b *Backup) CompleteItemFn(item string, previous, current *restic.Node, s archiver.ItemStats, d time.Duration) {
func (b *Backup) CompleteItem(item string, previous, current *restic.Node, s archiver.ItemStats, d time.Duration) {
    b.summary.Lock()
    b.summary.ItemStats.Add(s)

    // for the last item "/", current is nil
    if current != nil {
        b.summary.ProcessedBytes += current.Size
    }

    b.summary.Unlock()

    if current == nil {
@@ -349,7 +356,7 @@ func (b *Backup) ReportTotal(item string, s archiver.ScanStats) {
}

// Finish prints the finishing messages.
func (b *Backup) Finish() {
func (b *Backup) Finish(snapshotID restic.ID) {
    close(b.finished)

    b.P("\n")
@@ -361,7 +368,13 @@ func (b *Backup) Finish() {
    b.P("\n")
    b.P("processed %v files, %v in %s",
        b.summary.Files.New+b.summary.Files.Changed+b.summary.Files.Unchanged,
        formatBytes(b.totalBytes),
        formatBytes(b.summary.ProcessedBytes),
        formatDuration(time.Since(b.start)),
    )
}

// SetMinUpdatePause sets b.MinUpdatePause. It satisfies the
// ArchiveProgressReporter interface.
func (b *Backup) SetMinUpdatePause(d time.Duration) {
    b.MinUpdatePause = d
}
420 internal/ui/jsonstatus/status.go Normal file
@@ -0,0 +1,420 @@
package jsonstatus

import (
    "context"
    "encoding/json"
    "os"
    "sort"
    "sync"
    "time"

    "github.com/restic/restic/internal/archiver"
    "github.com/restic/restic/internal/restic"
    "github.com/restic/restic/internal/ui"
    "github.com/restic/restic/internal/ui/termstatus"
)

type counter struct {
    Files, Dirs, Bytes uint64
}

type fileWorkerMessage struct {
    filename string
    done     bool
}

// Backup reports progress for the `backup` command in JSON.
type Backup struct {
    *ui.Message
    *ui.StdioWrapper

    MinUpdatePause time.Duration

    term  *termstatus.Terminal
    v     uint
    start time.Time

    totalBytes uint64

    totalCh     chan counter
    processedCh chan counter
    errCh       chan struct{}
    workerCh    chan fileWorkerMessage
    finished    chan struct{}

    summary struct {
        sync.Mutex
        Files, Dirs struct {
            New       uint
            Changed   uint
            Unchanged uint
        }
        archiver.ItemStats
    }
}

// NewBackup returns a new backup progress reporter.
func NewBackup(term *termstatus.Terminal, verbosity uint) *Backup {
    return &Backup{
        Message:      ui.NewMessage(term, verbosity),
        StdioWrapper: ui.NewStdioWrapper(term),
        term:         term,
        v:            verbosity,
        start:        time.Now(),

        // limit to 60fps by default
        MinUpdatePause: time.Second / 60,

        totalCh:     make(chan counter),
        processedCh: make(chan counter),
        errCh:       make(chan struct{}),
        workerCh:    make(chan fileWorkerMessage),
        finished:    make(chan struct{}),
    }
}

// Run regularly updates the status lines. It should be called in a separate
// goroutine.
func (b *Backup) Run(ctx context.Context) error {
    var (
        lastUpdate       time.Time
        total, processed counter
        errors           uint
        started          bool
        currentFiles     = make(map[string]struct{})
        secondsRemaining uint64
    )

    t := time.NewTicker(time.Second)
    defer t.Stop()

    for {
        select {
        case <-ctx.Done():
            return nil
        case <-b.finished:
            started = false
        case t, ok := <-b.totalCh:
            if ok {
                total = t
                started = true
            } else {
                // scan has finished
                b.totalCh = nil
                b.totalBytes = total.Bytes
            }
        case s := <-b.processedCh:
            processed.Files += s.Files
            processed.Dirs += s.Dirs
            processed.Bytes += s.Bytes
            started = true
        case <-b.errCh:
            errors++
            started = true
        case m := <-b.workerCh:
            if m.done {
                delete(currentFiles, m.filename)
            } else {
                currentFiles[m.filename] = struct{}{}
            }
        case <-t.C:
            if !started {
                continue
            }

            if b.totalCh == nil {
                secs := float64(time.Since(b.start) / time.Second)
                todo := float64(total.Bytes - processed.Bytes)
                secondsRemaining = uint64(secs / float64(processed.Bytes) * todo)
            }
        }

        // limit update frequency
        if time.Since(lastUpdate) < b.MinUpdatePause {
            continue
        }
        lastUpdate = time.Now()

        b.update(total, processed, errors, currentFiles, secondsRemaining)
    }
}

// update updates the status lines.
func (b *Backup) update(total, processed counter, errors uint, currentFiles map[string]struct{}, secs uint64) {
    status := statusUpdate{
        MessageType:      "status",
        SecondsElapsed:   uint64(time.Since(b.start) / time.Second),
        SecondsRemaining: secs,
        TotalFiles:       total.Files,
        FilesDone:        processed.Files,
        TotalBytes:       total.Bytes,
        BytesDone:        processed.Bytes,
        ErrorCount:       errors,
    }

    if total.Bytes > 0 {
        status.PercentDone = float64(processed.Bytes) / float64(total.Bytes)
    }

    for filename := range currentFiles {
        status.CurrentFiles = append(status.CurrentFiles, filename)
    }
    sort.Sort(sort.StringSlice(status.CurrentFiles))

    json.NewEncoder(b.StdioWrapper.Stdout()).Encode(status)
}

// ScannerError is the error callback function for the scanner, it prints the
// error in verbose mode and returns nil.
func (b *Backup) ScannerError(item string, fi os.FileInfo, err error) error {
    json.NewEncoder(b.StdioWrapper.Stderr()).Encode(errorUpdate{
        MessageType: "error",
        Error:       err,
        During:      "scan",
        Item:        item,
    })
    return nil
}

// Error is the error callback function for the archiver, it prints the error and returns nil.
func (b *Backup) Error(item string, fi os.FileInfo, err error) error {
    json.NewEncoder(b.StdioWrapper.Stderr()).Encode(errorUpdate{
        MessageType: "error",
        Error:       err,
        During:      "archival",
        Item:        item,
    })
    b.errCh <- struct{}{}
    return nil
}

// StartFile is called when a file is being processed by a worker.
func (b *Backup) StartFile(filename string) {
    b.workerCh <- fileWorkerMessage{
        filename: filename,
    }
}

// CompleteBlob is called for all saved blobs for files.
func (b *Backup) CompleteBlob(filename string, bytes uint64) {
    b.processedCh <- counter{Bytes: bytes}
}

// CompleteItem is the status callback function for the archiver when a
// file/dir has been saved successfully.
func (b *Backup) CompleteItem(item string, previous, current *restic.Node, s archiver.ItemStats, d time.Duration) {
    b.summary.Lock()
    b.summary.ItemStats.Add(s)
    b.summary.Unlock()

    if current == nil {
        // error occurred, tell the status display to remove the line
        b.workerCh <- fileWorkerMessage{
            filename: item,
            done:     true,
        }
        return
    }

    switch current.Type {
    case "file":
        b.processedCh <- counter{Files: 1}
        b.workerCh <- fileWorkerMessage{
            filename: item,
            done:     true,
        }
    case "dir":
        b.processedCh <- counter{Dirs: 1}
    }

    if current.Type == "dir" {
        if previous == nil {
            if b.v >= 3 {
                json.NewEncoder(b.StdioWrapper.Stdout()).Encode(verboseUpdate{
                    MessageType:  "verbose_status",
                    Action:       "new",
                    Item:         item,
                    Duration:     d.Seconds(),
                    DataSize:     s.DataSize,
                    MetadataSize: s.TreeSize,
                })
            }
            b.summary.Lock()
            b.summary.Dirs.New++
            b.summary.Unlock()
            return
        }

        if previous.Equals(*current) {
            if b.v >= 3 {
                json.NewEncoder(b.StdioWrapper.Stdout()).Encode(verboseUpdate{
                    MessageType: "verbose_status",
                    Action:      "unchanged",
                    Item:        item,
                })
            }
            b.summary.Lock()
            b.summary.Dirs.Unchanged++
            b.summary.Unlock()
        } else {
            if b.v >= 3 {
                json.NewEncoder(b.StdioWrapper.Stdout()).Encode(verboseUpdate{
                    MessageType:  "verbose_status",
                    Action:       "modified",
                    Item:         item,
                    Duration:     d.Seconds(),
                    DataSize:     s.DataSize,
                    MetadataSize: s.TreeSize,
                })
            }
            b.summary.Lock()
            b.summary.Dirs.Changed++
            b.summary.Unlock()
        }

    } else if current.Type == "file" {

        b.workerCh <- fileWorkerMessage{
            done:     true,
            filename: item,
        }

        if previous == nil {
            if b.v >= 3 {
                json.NewEncoder(b.StdioWrapper.Stdout()).Encode(verboseUpdate{
                    MessageType: "verbose_status",
                    Action:      "new",
                    Item:        item,
                    Duration:    d.Seconds(),
                    DataSize:    s.DataSize,
                })
            }
            b.summary.Lock()
            b.summary.Files.New++
            b.summary.Unlock()
            return
        }

        if previous.Equals(*current) {
            if b.v >= 3 {
                json.NewEncoder(b.StdioWrapper.Stdout()).Encode(verboseUpdate{
                    MessageType: "verbose_status",
                    Action:      "unchanged",
                    Item:        item,
                })
            }
            b.summary.Lock()
            b.summary.Files.Unchanged++
            b.summary.Unlock()
        } else {
            if b.v >= 3 {
                json.NewEncoder(b.StdioWrapper.Stdout()).Encode(verboseUpdate{
                    MessageType: "verbose_status",
                    Action:      "modified",
                    Item:        item,
                    Duration:    d.Seconds(),
                    DataSize:    s.DataSize,
                })
            }
            b.summary.Lock()
            b.summary.Files.Changed++
            b.summary.Unlock()
        }
    }
}

// ReportTotal sets the total stats up to now
func (b *Backup) ReportTotal(item string, s archiver.ScanStats) {
    select {
    case b.totalCh <- counter{Files: uint64(s.Files), Dirs: uint64(s.Dirs), Bytes: s.Bytes}:
    case <-b.finished:
    }

    if item == "" {
        if b.v >= 2 {
            json.NewEncoder(b.StdioWrapper.Stdout()).Encode(verboseUpdate{
                MessageType: "status",
                Action:      "scan_finished",
                Duration:    time.Since(b.start).Seconds(),
                DataSize:    s.Bytes,
                TotalFiles:  s.Files,
            })
        }
        close(b.totalCh)
        return
    }
}

// Finish prints the finishing messages.
func (b *Backup) Finish(snapshotID restic.ID) {
    close(b.finished)
    json.NewEncoder(b.StdioWrapper.Stdout()).Encode(summaryOutput{
        MessageType:         "summary",
        FilesNew:            b.summary.Files.New,
        FilesChanged:        b.summary.Files.Changed,
        FilesUnmodified:     b.summary.Files.Unchanged,
        DirsNew:             b.summary.Dirs.New,
        DirsChanged:         b.summary.Dirs.Changed,
        DirsUnmodified:      b.summary.Dirs.Unchanged,
        DataBlobs:           b.summary.ItemStats.DataBlobs,
        TreeBlobs:           b.summary.ItemStats.TreeBlobs,
        DataAdded:           b.summary.ItemStats.DataSize + b.summary.ItemStats.TreeSize,
        TotalFilesProcessed: b.summary.Files.New + b.summary.Files.Changed + b.summary.Files.Unchanged,
        TotalBytesProcessed: b.totalBytes,
        TotalDuration:       time.Since(b.start).Seconds(),
        SnapshotID:          snapshotID.Str(),
    })
}

// SetMinUpdatePause sets b.MinUpdatePause. It satisfies the
// ArchiveProgressReporter interface.
func (b *Backup) SetMinUpdatePause(d time.Duration) {
    b.MinUpdatePause = d
}

type statusUpdate struct {
    MessageType      string   `json:"message_type"` // "status"
    SecondsElapsed   uint64   `json:"seconds_elapsed,omitempty"`
    SecondsRemaining uint64   `json:"seconds_remaining,omitempty"`
    PercentDone      float64  `json:"percent_done"`
    TotalFiles       uint64   `json:"total_files,omitempty"`
    FilesDone        uint64   `json:"files_done,omitempty"`
    TotalBytes       uint64   `json:"total_bytes,omitempty"`
    BytesDone        uint64   `json:"bytes_done,omitempty"`
    ErrorCount       uint     `json:"error_count,omitempty"`
    CurrentFiles     []string `json:"current_files,omitempty"`
}

type errorUpdate struct {
    MessageType string `json:"message_type"` // "error"
    Error       error  `json:"error"`
    During      string `json:"during"`
    Item        string `json:"item"`
}

type verboseUpdate struct {
    MessageType  string  `json:"message_type"` // "verbose_status"
    Action       string  `json:"action"`
    Item         string  `json:"item"`
    Duration     float64 `json:"duration"` // in seconds
    DataSize     uint64  `json:"data_size"`
    MetadataSize uint64  `json:"metadata_size"`
    TotalFiles   uint    `json:"total_files"`
}

type summaryOutput struct {
    MessageType         string  `json:"message_type"` // "summary"
    FilesNew            uint    `json:"files_new"`
    FilesChanged        uint    `json:"files_changed"`
    FilesUnmodified     uint    `json:"files_unmodified"`
    DirsNew             uint    `json:"dirs_new"`
    DirsChanged         uint    `json:"dirs_changed"`
    DirsUnmodified      uint    `json:"dirs_unmodified"`
    DataBlobs           int     `json:"data_blobs"`
    TreeBlobs           int     `json:"tree_blobs"`
    DataAdded           uint64  `json:"data_added"`
    TotalFilesProcessed uint    `json:"total_files_processed"`
    TotalBytesProcessed uint64  `json:"total_bytes_processed"`
    TotalDuration       float64 `json:"total_duration"` // in seconds
    SnapshotID          string  `json:"snapshot_id"`
}
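Given the struct tags above, the encoder emits one JSON object per line on stdout; the stream would look roughly like this (all values invented for illustration):

{"message_type":"status","seconds_elapsed":4,"percent_done":0.5,"total_files":100,"files_done":50,"total_bytes":2097152,"bytes_done":1048576}
{"message_type":"summary","files_new":40,"files_changed":8,"files_unmodified":52,"dirs_new":0,"dirs_changed":2,"dirs_unmodified":10,"data_blobs":55,"tree_blobs":12,"data_added":1048576,"total_files_processed":100,"total_bytes_processed":2097152,"total_duration":4.2,"snapshot_id":"b3f0d33c"}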
30 vendor/cloud.google.com/go/compute/metadata/metadata.go generated vendored
@@ -20,6 +20,7 @@
package metadata // import "cloud.google.com/go/compute/metadata"

import (
    "context"
    "encoding/json"
    "fmt"
    "io/ioutil"
@@ -31,9 +32,6 @@ import (
    "strings"
    "sync"
    "time"

    "golang.org/x/net/context"
    "golang.org/x/net/context/ctxhttp"
)

const (
@@ -139,11 +137,11 @@ func testOnGCE() bool {
    resc := make(chan bool, 2)

    // Try two strategies in parallel.
    // See https://github.com/GoogleCloudPlatform/google-cloud-go/issues/194
    // See https://github.com/googleapis/google-cloud-go/issues/194
    go func() {
        req, _ := http.NewRequest("GET", "http://"+metadataIP, nil)
        req.Header.Set("User-Agent", userAgent)
        res, err := ctxhttp.Do(ctx, defaultClient.hc, req)
        res, err := defaultClient.hc.Do(req.WithContext(ctx))
        if err != nil {
            resc <- false
            return
@@ -302,8 +300,8 @@ func (c *Client) getETag(suffix string) (value, etag string, err error) {
        // being stable anyway.
        host = metadataIP
    }
    url := "http://" + host + "/computeMetadata/v1/" + suffix
    req, _ := http.NewRequest("GET", url, nil)
    u := "http://" + host + "/computeMetadata/v1/" + suffix
    req, _ := http.NewRequest("GET", u, nil)
    req.Header.Set("Metadata-Flavor", "Google")
    req.Header.Set("User-Agent", userAgent)
    res, err := c.hc.Do(req)
@@ -314,13 +312,13 @@ func (c *Client) getETag(suffix string) (value, etag string, err error) {
    if res.StatusCode == http.StatusNotFound {
        return "", "", NotDefinedError(suffix)
    }
    if res.StatusCode != 200 {
        return "", "", fmt.Errorf("status code %d trying to fetch %s", res.StatusCode, url)
    }
    all, err := ioutil.ReadAll(res.Body)
    if err != nil {
        return "", "", err
    }
    if res.StatusCode != 200 {
        return "", "", &Error{Code: res.StatusCode, Message: string(all)}
    }
    return string(all), res.Header.Get("Etag"), nil
}

@@ -501,3 +499,15 @@ func (c *Client) Subscribe(suffix string, fn func(v string, ok bool) error) erro
        }
    }
}

// Error contains an error response from the server.
type Error struct {
    // Code is the HTTP response status code.
    Code int
    // Message is the server response message.
    Message string
}

func (e *Error) Error() string {
    return fmt.Sprintf("compute: Received %d `%s`", e.Code, e.Message)
}
17 vendor/contrib.go.opencensus.io/exporter/ocagent/.travis.yml generated vendored Normal file
@@ -0,0 +1,17 @@
language: go

go:
  - 1.11.x

go_import_path: contrib.go.opencensus.io/exporter/ocagent

before_script:
  - GO_FILES=$(find . -iname '*.go' | grep -v /vendor/)  # All the .go files, excluding vendor/ if any
  - PKGS=$(go list ./... | grep -v /vendor/)             # All the import paths, excluding vendor/ if any

script:
  - go build ./...   # Ensure dependency updates don't break build
  - if [ -n "$(gofmt -s -l $GO_FILES)" ]; then echo "gofmt the following files:"; gofmt -s -l $GO_FILES; exit 1; fi
  - go vet ./...
  - go test -v -race $PKGS   # Run all the tests with the race detector enabled
  - 'if [[ $TRAVIS_GO_VERSION = 1.8* ]]; then ! golint ./... | grep -vE "(_mock|_string|\.pb)\.go:"; fi'
24 vendor/contrib.go.opencensus.io/exporter/ocagent/CONTRIBUTING.md generated vendored Normal file
@@ -0,0 +1,24 @@
# How to contribute

We'd love to accept your patches and contributions to this project. There are
just a few small guidelines you need to follow.

## Contributor License Agreement

Contributions to this project must be accompanied by a Contributor License
Agreement. You (or your employer) retain the copyright to your contribution,
this simply gives us permission to use and redistribute your contributions as
part of the project. Head over to <https://cla.developers.google.com/> to see
your current agreements on file or to sign a new one.

You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.

## Code reviews

All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult [GitHub Help] for more
information on using pull requests.

[GitHub Help]: https://help.github.com/articles/about-pull-requests/
201 vendor/contrib.go.opencensus.io/exporter/ocagent/LICENSE generated vendored Normal file
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
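   As a worked illustration (not part of the license text itself): for a Go
   source file -- Go being restic's implementation language -- the appendix's
   boilerplate, enclosed in Go's line-comment syntax, would look like the
   following. The bracketed fields remain placeholders to be replaced as the
   appendix describes.

   // Copyright [yyyy] [name of copyright owner]
   //
   // Licensed under the Apache License, Version 2.0 (the "License");
   // you may not use this file except in compliance with the License.
   // You may obtain a copy of the License at
   //
   //     http://www.apache.org/licenses/LICENSE-2.0
   //
   // Unless required by applicable law or agreed to in writing, software
   // distributed under the License is distributed on an "AS IS" BASIS,
   // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   // See the License for the specific language governing permissions and
   // limitations under the License.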