Mirror of https://github.com/restic/restic.git (synced 2026-02-23 09:16:24 +00:00)

Compare commits: 65 commits

04cfb984ae, 02a245941a, 7fb1352aa1, 4c555bad2e, 75c789bab4, 626d020e62,
3830117735, 042cee8e36, 03cc5b47e9, 46fa45942e, 0cb4104aa7, f2bbc5fbc4,
16340ce811, 2cf8153f4a, 2f00287e45, 0de17f64e9, c30838878f, bd31281f1e,
7fc54ed98e, 0abdcedcab, 6c05353086, e7575bf380, 89ace85903, 68a91d66b7,
724b5bf4fe, d6da9211bc, f45abac27f, 00b9a1d87d, 20b835b5a4, 7bb1a474df,
750ee35dbf, fda5e1f543, 78d090aea5, 7362569cf5, f5b1c7e5f1, c554cdac4c,
41b624ea1b, 7cdcaadcf5, 4ad33d3c3b, 2778ac21de, cb3cd57926, ba6815d413,
52004cdde8, 1d2045cb61, 357e2e404a, 38e5640cda, c4c731bd9a, 04d27acd60,
80f0303b21, d651d9b427, 3b2648bd5e, 73cc11f000, 637de0149c, 855575e5a7,
ed2999a163, a18c16e19e, 9032ab2eec, 03f66b8d74, 8c30ae7c65, 453c9c9199,
993e370f92, 2bcd3a3acc, c54c632ca1, 28a4a35625, e7577d7bb4
CHANGELOG.md | 55
@@ -1,7 +1,60 @@
 This file describes changes relevant to all users that are made in each
 released version of restic from the perspective of the user.

-Important Changes in 0.X.Y
+Important Changes in 0.7.1
 ==========================

+* The `migrate` command for changing the `s3legacy` layout to the `default`
+  layout for s3 backends has been improved: It can now be restarted with
+  `restic migrate --force s3_layout` and automatically retries operations on
+  error.
+  https://github.com/restic/restic/issues/1073
+  https://github.com/restic/restic/pull/1075
+
+Small changes
+-------------
+
+* The local and sftp backends now create the subdirs below `data/` on
+  open/init. This way, restic makes sure that they always exist. This is
+  connected to an issue for the sftp server:
+  https://github.com/restic/rest-server/pull/11#issuecomment-309879710
+  https://github.com/restic/restic/issues/1055
+  https://github.com/restic/restic/pull/1077
+  https://github.com/restic/restic/pull/1105
+
+* When no S3 credentials are specified in the environment variables, restic
+  now tries to load credentials from an IAM instance profile when the s3
+  backend is used.
+  https://github.com/restic/restic/issues/1067
+  https://github.com/restic/restic/pull/1086
+
+* On Darwin and FreeBSD, restic now prints stats when SIGINFO is received
+  (usually when ctrl+t is pressed).
+  https://github.com/restic/restic/pull/1082
+
+* The dependencies have been updated.
+  https://github.com/restic/restic/pull/1108
+  https://github.com/restic/restic/pull/1124
+
+* A bug was found (and corrected) in the index rebuilding after prune, which
+  led to indexes which include blobs that were not present in the repo any
+  more. There were already checks in place which detected this situation and
+  aborted with an error message. A new run of either `prune` or
+  `rebuild-index` corrected the index files. This is now fixed and a test has
+  been added to detect this.
+  https://github.com/restic/restic/pull/1115
+
+* Errors for chmod() on Unix for filesystems which do not support it (e.g. smb
+  mounted via gvfs) are now ignored.
+  https://github.com/restic/restic/pull/1080
+  https://github.com/restic/restic/pull/1112
+
+* The semantics of the `--tags` option to `forget` and `snapshots` were
+  clarified:
+  https://github.com/restic/restic/issues/1081
+  https://github.com/restic/restic/pull/1090
+
 Important Changes in 0.7.0
 ==========================

 * New "swift" backend: A new backend for the OpenStack Swift cloud storage
@@ -95,7 +95,7 @@ complete text in ``LICENSE``.
    :target: https://travis-ci.org/restic/restic
 .. |Build status| image:: https://ci.appveyor.com/api/projects/status/nuy4lfbgfbytw92q/branch/master?svg=true
    :target: https://ci.appveyor.com/project/fd0/restic/branch/master
-.. |Report Card| image:: http://goreportcard.com/badge/github.com/restic/restic
-   :target: http://goreportcard.com/report/github.com/restic/restic
+.. |Report Card| image:: https://goreportcard.com/badge/github.com/restic/restic
+   :target: https://goreportcard.com/report/github.com/restic/restic
 .. |Say Thanks| image:: https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg
    :target: https://saythanks.io/to/restic
@@ -17,8 +17,8 @@ init:

 install:
   - rmdir c:\go /s /q
-  - appveyor DownloadFile https://storage.googleapis.com/golang/go1.8.1.windows-amd64.msi
-  - msiexec /i go1.8.1.windows-amd64.msi /q
+  - appveyor DownloadFile https://storage.googleapis.com/golang/go1.8.3.windows-amd64.msi
+  - msiexec /i go1.8.3.windows-amd64.msi /q
   - go version
   - go env
   - appveyor DownloadFile http://sourceforge.netcologne.de/project/gnuwin32/tar/1.13-1/tar-1.13-1-bin.zip -FileName tar.zip
build.go | 5
@@ -401,7 +401,10 @@ func main() {
     if err != nil {
         die("Getwd() returned %v\n", err)
     }
-    output := filepath.Join(cwd, outputFilename)
+    output := outputFilename
+    if !filepath.IsAbs(output) {
+        output = filepath.Join(cwd, output)
+    }

     version := getVersion()
     constants := Constants{}
@@ -526,18 +526,20 @@ specified with ``--stdin-filename``, e.g. like this:

     $ mysqldump [...] | restic -r /tmp/backup backup --stdin --stdin-filename production.sql

-Tags
-~~~~
+Tags for backup
+~~~~~~~~~~~~~~~

 Snapshots can have one or more tags, short strings which add identifying
-information. Just specify the tags for a snapshot with ``--tag``:
+information. Just specify the tags for a snapshot one by one with ``--tag``:

 .. code-block:: console

-    $ restic -r /tmp/backup backup --tag projectX ~/shared/work/web
+    $ restic -r /tmp/backup backup --tag projectX --tag foo --tag bar ~/shared/work/web
     [...]

-The tags can later be used to keep (or forget) snapshots.
+The tags can later be used to keep (or forget) snapshots with the ``forget``
+command. The command ``tag`` can be used to modify tags on an existing
+snapshot.

 List all snapshots
 ------------------
@@ -644,7 +646,7 @@ command does that:

 .. code-block:: console

-    $ restic -r /tmp/backup tag --set NL,CH 590c8fc8
+    $ restic -r /tmp/backup tag --set NL --set CH 590c8fc8
     Create exclusive lock for repository
     Modified tags on 1 snapshots
@@ -872,7 +874,26 @@ The ``forget`` command accepts the following parameters:

 Additionally, you can restrict removing snapshots to those which have a
 particular hostname with the ``--hostname`` parameter, or tags with the
 ``--tag`` option. When multiple tags are specified, only the snapshots
-which have all the tags are considered.
+which have all the tags are considered. For example, the following command
+removes all but the latest snapshot of all snapshots that have the tag ``foo``:
+
+.. code-block:: console
+
+    $ restic forget --tag foo --keep-last 1
+
+This command removes all but the last snapshot of all snapshots that have
+either the ``foo`` or ``bar`` tag set:
+
+.. code-block:: console
+
+    $ restic forget --tag foo --tag bar --keep-last 1
+
+To only keep the last snapshot of all snapshots with both the tag ``foo`` and
+``bar`` set use:
+
+.. code-block:: console
+
+    $ restic forget --tag foo,bar --keep-last 1

 All the ``--keep-*`` options above only count
 hours/days/weeks/months/years which have a snapshot, so those without a
@@ -986,7 +1007,7 @@ Under the hood: Browse repository objects

 Internally, a repository stores data of several different types
 described in the `design
-documentation <https://github.com/restic/restic/blob/master/doc/Design.md>`__.
+documentation <https://github.com/restic/restic/blob/master/doc/Design.rst>`__.
 You can ``list`` objects such as blobs, packs, index, snapshots, keys or
 locks with the following command:
@@ -68,12 +68,12 @@ func init() {
     f := cmdBackup.Flags()
     f.StringVar(&backupOptions.Parent, "parent", "", "use this parent snapshot (default: last snapshot in the repo that has the same target files/directories)")
     f.BoolVarP(&backupOptions.Force, "force", "f", false, `force re-reading the target files/directories (overrides the "parent" flag)`)
-    f.StringSliceVarP(&backupOptions.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
-    f.StringSliceVar(&backupOptions.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
+    f.StringArrayVarP(&backupOptions.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
+    f.StringArrayVar(&backupOptions.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
     f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems")
     f.BoolVar(&backupOptions.Stdin, "stdin", false, "read backup from stdin")
     f.StringVar(&backupOptions.StdinFilename, "stdin-filename", "stdin", "file name to use when reading from stdin")
-    f.StringSliceVar(&backupOptions.Tags, "tag", nil, "add a `tag` for the new snapshot (can be specified multiple times)")
+    f.StringArrayVar(&backupOptions.Tags, "tag", nil, "add a `tag` for the new snapshot (can be specified multiple times)")
     f.StringVar(&backupOptions.Hostname, "hostname", hostname, "set the `hostname` for the snapshot manually")
     f.StringVar(&backupOptions.FilesFrom, "files-from", "", "read the files to backup from file (can be combined with file args)")
 }
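This StringSliceVar-to-StringArrayVar swap (repeated below for find, forget, ls, mount, restore, snapshots and tag) changes how repeated flags are parsed: pflag's slice variant splits every value on commas, while the array variant keeps each occurrence verbatim, so an exclude pattern that contains a comma survives intact. A minimal standalone sketch against github.com/spf13/pflag directly (the library behind cobra's flags); the flag names here are made up for the demo:

package main

import (
    "fmt"

    "github.com/spf13/pflag"
)

func main() {
    fs := pflag.NewFlagSet("demo", pflag.ContinueOnError)
    var slice, array []string
    fs.StringSliceVar(&slice, "slice", nil, "comma-splitting flag")
    fs.StringArrayVar(&array, "array", nil, "verbatim flag")

    // Each flag is given once, with a comma inside the value.
    fs.Parse([]string{"--slice=a,b", "--array=a,b"})

    fmt.Println(slice) // [a b]  -- the value was split on the comma
    fmt.Println(array) // [a,b]  -- the value is kept as one element
}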
@@ -392,7 +392,7 @@ func runBackup(opts BackupOptions, gopts GlobalOptions, args []string) error {

     // Find last snapshot to set it as parent, if not already set
     if !opts.Force && parentSnapshotID == nil {
-        id, err := restic.FindLatestSnapshot(context.TODO(), repo, target, opts.Tags, opts.Hostname)
+        id, err := restic.FindLatestSnapshot(context.TODO(), repo, target, []restic.TagList{opts.Tags}, opts.Hostname)
         if err == nil {
             parentSnapshotID = &id
         } else if err != restic.ErrNoSnapshotFound {
@@ -34,7 +34,7 @@ type FindOptions struct {
     ListLong bool
     Host     string
     Paths    []string
-    Tags     []string
+    Tags     restic.TagLists
 }

 var findOptions FindOptions
@@ -45,13 +45,13 @@ func init() {
     f := cmdFind.Flags()
     f.StringVarP(&findOptions.Oldest, "oldest", "O", "", "oldest modification date/time")
     f.StringVarP(&findOptions.Newest, "newest", "N", "", "newest modification date/time")
-    f.StringSliceVarP(&findOptions.Snapshots, "snapshot", "s", nil, "snapshot `id` to search in (can be given multiple times)")
+    f.StringArrayVarP(&findOptions.Snapshots, "snapshot", "s", nil, "snapshot `id` to search in (can be given multiple times)")
     f.BoolVarP(&findOptions.CaseInsensitive, "ignore-case", "i", false, "ignore case for pattern")
     f.BoolVarP(&findOptions.ListLong, "long", "l", false, "use a long listing format showing size and mode")

     f.StringVarP(&findOptions.Host, "host", "H", "", "only consider snapshots for this `host`, when no snapshot ID is given")
-    f.StringSliceVar(&findOptions.Tags, "tag", nil, "only consider snapshots which include this `tag`, when no snapshot-ID is given")
-    f.StringSliceVar(&findOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot-ID is given")
+    f.Var(&findOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot-ID is given")
+    f.StringArrayVar(&findOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot-ID is given")
 }

 type findPattern struct {
@@ -31,10 +31,10 @@ type ForgetOptions struct {
     Weekly   int
     Monthly  int
     Yearly   int
-    KeepTags []string
+    KeepTags restic.TagLists

     Host  string
-    Tags  []string
+    Tags  restic.TagLists
     Paths []string

     GroupByTags bool
@@ -55,14 +55,14 @@ func init() {
     f.IntVarP(&forgetOptions.Monthly, "keep-monthly", "m", 0, "keep the last `n` monthly snapshots")
     f.IntVarP(&forgetOptions.Yearly, "keep-yearly", "y", 0, "keep the last `n` yearly snapshots")

-    f.StringSliceVar(&forgetOptions.KeepTags, "keep-tag", []string{}, "keep snapshots with this `tag` (can be specified multiple times)")
+    f.Var(&forgetOptions.KeepTags, "keep-tag", "keep snapshots with this `taglist` (can be specified multiple times)")
     f.BoolVarP(&forgetOptions.GroupByTags, "group-by-tags", "G", false, "Group by host,paths,tags instead of just host,paths")
     // Sadly the commonly used shortcut `H` is already used.
     f.StringVar(&forgetOptions.Host, "host", "", "only consider snapshots with the given `host`")
     // Deprecated since 2017-03-07.
     f.StringVar(&forgetOptions.Host, "hostname", "", "only consider snapshots with the given `hostname` (deprecated)")
-    f.StringSliceVar(&forgetOptions.Tags, "tag", nil, "only consider snapshots which include this `tag` (can be specified multiple times)")
-    f.StringSliceVar(&forgetOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` (can be specified multiple times)")
+    f.Var(&forgetOptions.Tags, "tag", "only consider snapshots which include this `taglist` in the format `tag[,tag,...]` (can be specified multiple times)")
+    f.StringArrayVar(&forgetOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` (can be specified multiple times)")

     f.BoolVarP(&forgetOptions.DryRun, "dry-run", "n", false, "do not delete anything, just print what would be done")
     f.BoolVar(&forgetOptions.Prune, "prune", false, "automatically run the 'prune' command if snapshots have been removed")
@@ -28,7 +28,7 @@ The special snapshot-ID "latest" can be used to list files and directories of th
 type LsOptions struct {
     ListLong bool
     Host     string
-    Tags     []string
+    Tags     restic.TagLists
     Paths    []string
 }
@@ -41,8 +41,8 @@ func init() {
     flags.BoolVarP(&lsOptions.ListLong, "long", "l", false, "use a long listing format showing size and mode")

     flags.StringVarP(&lsOptions.Host, "host", "H", "", "only consider snapshots for this `host`, when no snapshot ID is given")
-    flags.StringSliceVar(&lsOptions.Tags, "tag", nil, "only consider snapshots which include this `tag`, when no snapshot ID is given")
-    flags.StringSliceVar(&lsOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot ID is given")
+    flags.Var(&lsOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot ID is given")
+    flags.StringArrayVar(&lsOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot ID is given")
 }

 func printTree(repo *repository.Repository, id *restic.ID, prefix string) error {
@@ -21,12 +21,15 @@ name is explicitely given, a list of migrations that can be applied is printed.

 // MigrateOptions bundles all options for the 'check' command.
 type MigrateOptions struct {
+    Force bool
 }

 var migrateOptions MigrateOptions

 func init() {
     cmdRoot.AddCommand(cmdMigrate)
+    f := cmdMigrate.Flags()
+    f.BoolVarP(&migrateOptions.Force, "force", "f", false, `apply a migration a second time`)
 }

 func checkMigrations(opts MigrateOptions, gopts GlobalOptions, repo restic.Repository) error {
@@ -59,8 +62,12 @@ func applyMigrations(opts MigrateOptions, gopts GlobalOptions, repo restic.Repos
         }

         if !ok {
-            Warnf("migration %v cannot be applied: check failed\n", m.Name())
-            continue
+            if !opts.Force {
+                Warnf("migration %v cannot be applied: check failed\nIf you want to apply this migration anyway, re-run with option --force\n", m.Name())
+                continue
+            }
+
+            Warnf("check for migration %v failed, continuing anyway\n", m.Name())
         }

         Printf("applying migration %v...\n", m.Name())
@@ -6,6 +6,7 @@ package main
 import (
     "context"
     "os"
+    "restic"

     "github.com/spf13/cobra"
@@ -37,7 +38,7 @@ type MountOptions struct {
     AllowRoot  bool
     AllowOther bool
     Host       string
-    Tags       []string
+    Tags       restic.TagLists
     Paths      []string
 }
@@ -52,8 +53,8 @@ func init() {
     mountFlags.BoolVar(&mountOptions.AllowOther, "allow-other", false, "allow other users to access the data in the mounted directory")

     mountFlags.StringVarP(&mountOptions.Host, "host", "H", "", `only consider snapshots for this host`)
-    mountFlags.StringSliceVar(&mountOptions.Tags, "tag", nil, "only consider snapshots which include this `tag`")
-    mountFlags.StringSliceVar(&mountOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`")
+    mountFlags.Var(&mountOptions.Tags, "tag", "only consider snapshots which include this `taglist`")
+    mountFlags.StringArrayVar(&mountOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`")
 }

 func mount(opts MountOptions, gopts GlobalOptions, mountpoint string) error {
@@ -235,22 +235,23 @@ func pruneRepository(gopts GlobalOptions, repo restic.Repository) error {
     Verbosef("will delete %d packs and rewrite %d packs, this frees %s\n",
         len(removePacks), len(rewritePacks), formatBytes(uint64(removeBytes)))

-    var repackedBlobs restic.IDSet
+    var obsoletePacks restic.IDSet
     if len(rewritePacks) != 0 {
         bar = newProgressMax(!gopts.Quiet, uint64(len(rewritePacks)), "packs rewritten")
         bar.Start()
-        repackedBlobs, err = repository.Repack(ctx, repo, rewritePacks, usedBlobs, bar)
+        obsoletePacks, err = repository.Repack(ctx, repo, rewritePacks, usedBlobs, bar)
         if err != nil {
             return err
         }
         bar.Done()
     }

+    removePacks.Merge(obsoletePacks)
+
     if err = rebuildIndex(ctx, repo, removePacks); err != nil {
         return err
     }

-    removePacks.Merge(repackedBlobs)
     if len(removePacks) != 0 {
         bar = newProgressMax(!gopts.Quiet, uint64(len(removePacks)), "packs deleted")
         bar.Start()
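The reordering is the substance of the fix: packs made obsolete by repacking are merged into removePacks before the index is rebuilt, so the fresh index can no longer reference packs that are about to be deleted. A toy sketch of the set bookkeeping, with a hypothetical idSet type standing in for restic.IDSet:

package main

import "fmt"

type idSet map[string]struct{}

// merge adds all members of other to s.
func (s idSet) merge(other idSet) {
    for id := range other {
        s[id] = struct{}{}
    }
}

func main() {
    removePacks := idSet{"p1": {}}   // packs that are entirely unused
    obsoletePacks := idSet{"p2": {}} // packs whose live blobs were copied into new packs

    // Merge first, then rebuild the index over everything *not* in
    // removePacks, then delete. Rebuilding before the merge would leave p2
    // in the new index even though it is about to be deleted, which is
    // exactly the dangling-index bug described in the changelog.
    removePacks.merge(obsoletePacks)
    fmt.Println(len(removePacks)) // 2
}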
@@ -31,7 +31,7 @@ type RestoreOptions struct {
     Target string
     Host   string
     Paths  []string
-    Tags   []string
+    Tags   restic.TagLists
 }

 var restoreOptions RestoreOptions
@@ -40,13 +40,13 @@ func init() {
     cmdRoot.AddCommand(cmdRestore)

     flags := cmdRestore.Flags()
-    flags.StringSliceVarP(&restoreOptions.Exclude, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
-    flags.StringSliceVarP(&restoreOptions.Include, "include", "i", nil, "include a `pattern`, exclude everything else (can be specified multiple times)")
+    flags.StringArrayVarP(&restoreOptions.Exclude, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
+    flags.StringArrayVarP(&restoreOptions.Include, "include", "i", nil, "include a `pattern`, exclude everything else (can be specified multiple times)")
     flags.StringVarP(&restoreOptions.Target, "target", "t", "", "directory to extract data to")

     flags.StringVarP(&restoreOptions.Host, "host", "H", "", `only consider snapshots for this host when the snapshot ID is "latest"`)
-    flags.StringSliceVar(&restoreOptions.Tags, "tag", nil, "only consider snapshots which include this `tag` for snapshot ID \"latest\"")
-    flags.StringSliceVar(&restoreOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` for snapshot ID \"latest\"")
+    flags.Var(&restoreOptions.Tags, "tag", "only consider snapshots which include this `taglist` for snapshot ID \"latest\"")
+    flags.StringArrayVar(&restoreOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path` for snapshot ID \"latest\"")
 }

 func runRestore(opts RestoreOptions, gopts GlobalOptions, args []string) error {
@@ -26,7 +26,7 @@ The "snapshots" command lists all snapshots stored in the repository.
 // SnapshotOptions bundles all options for the snapshots command.
 type SnapshotOptions struct {
     Host  string
-    Tags  []string
+    Tags  restic.TagLists
     Paths []string
 }
@@ -37,8 +37,8 @@ func init() {

     f := cmdSnapshots.Flags()
     f.StringVarP(&snapshotOptions.Host, "host", "H", "", "only consider snapshots for this `host`")
-    f.StringSliceVar(&snapshotOptions.Tags, "tag", nil, "only consider snapshots which include this `tag` (can be specified multiple times)")
-    f.StringSliceVar(&snapshotOptions.Paths, "path", nil, "only consider snapshots for this `path` (can be specified multiple times)")
+    f.Var(&snapshotOptions.Tags, "tag", "only consider snapshots which include this `taglist` (can be specified multiple times)")
+    f.StringArrayVar(&snapshotOptions.Paths, "path", nil, "only consider snapshots for this `path` (can be specified multiple times)")
 }

 func runSnapshots(opts SnapshotOptions, gopts GlobalOptions, args []string) error {
@@ -31,7 +31,7 @@ When no snapshot-ID is given, all snapshots matching the host, tag and path filt
 type TagOptions struct {
     Host       string
     Paths      []string
-    Tags       []string
+    Tags       restic.TagLists
     SetTags    []string
     AddTags    []string
     RemoveTags []string
@@ -48,8 +48,8 @@ func init() {
     tagFlags.StringSliceVar(&tagOptions.RemoveTags, "remove", nil, "`tag` which will be removed from the existing tags (can be given multiple times)")

     tagFlags.StringVarP(&tagOptions.Host, "host", "H", "", "only consider snapshots for this `host`, when no snapshot ID is given")
-    tagFlags.StringSliceVar(&tagOptions.Tags, "tag", nil, "only consider snapshots which include this `tag`, when no snapshot-ID is given")
-    tagFlags.StringSliceVar(&tagOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot-ID is given")
+    tagFlags.Var(&tagOptions.Tags, "tag", "only consider snapshots which include this `taglist`, when no snapshot-ID is given")
+    tagFlags.StringArrayVar(&tagOptions.Paths, "path", nil, "only consider snapshots which include this (absolute) `path`, when no snapshot-ID is given")
 }

 func changeTags(repo *repository.Repository, sn *restic.Snapshot, setTags, addTags, removeTags []string) (bool, error) {
@@ -8,7 +8,7 @@ import (
 )

 // FindFilteredSnapshots yields Snapshots, either given explicitly by `snapshotIDs` or filtered from the list of all snapshots.
-func FindFilteredSnapshots(ctx context.Context, repo *repository.Repository, host string, tags []string, paths []string, snapshotIDs []string) <-chan *restic.Snapshot {
+func FindFilteredSnapshots(ctx context.Context, repo *repository.Repository, host string, tags []restic.TagList, paths []string, snapshotIDs []string) <-chan *restic.Snapshot {
     out := make(chan *restic.Snapshot)
     go func() {
         defer close(out)
@@ -67,7 +67,7 @@ func init() {

     f := cmdRoot.PersistentFlags()
     f.StringVarP(&globalOptions.Repo, "repo", "r", os.Getenv("RESTIC_REPOSITORY"), "repository to backup to or restore from (default: $RESTIC_REPOSITORY)")
-    f.StringVarP(&globalOptions.PasswordFile, "password-file", "p", "", "read the repository password from a file")
+    f.StringVarP(&globalOptions.PasswordFile, "password-file", "p", os.Getenv("RESTIC_PASSWORD_FILE"), "read the repository password from a file (default: $RESTIC_PASSWORD_FILE)")
     f.BoolVarP(&globalOptions.Quiet, "quiet", "q", false, "do not output comprehensive progress report")
     f.BoolVar(&globalOptions.NoLock, "no-lock", false, "do not lock the repo, this allows some operations on read-only repos")
     f.BoolVarP(&globalOptions.JSON, "json", "", false, "set output mode to JSON for commands that support it")
@@ -1175,15 +1175,19 @@ func TestPrune(t *testing.T) {
     SetupTarTestFixture(t, env.testdata, datafile)
     opts := BackupOptions{}

-    testRunBackup(t, []string{filepath.Join(env.testdata, "0", "0", "1")}, opts, gopts)
+    testRunBackup(t, []string{filepath.Join(env.testdata, "0", "0")}, opts, gopts)
+    firstSnapshot := testRunList(t, "snapshots", gopts)
+    Assert(t, len(firstSnapshot) == 1,
+        "expected one snapshot, got %v", firstSnapshot)
+
     testRunBackup(t, []string{filepath.Join(env.testdata, "0", "0", "2")}, opts, gopts)
     testRunBackup(t, []string{filepath.Join(env.testdata, "0", "0", "3")}, opts, gopts)

     snapshotIDs := testRunList(t, "snapshots", gopts)
     Assert(t, len(snapshotIDs) == 3,
-        "expected one snapshot, got %v", snapshotIDs)
+        "expected 3 snapshot, got %v", snapshotIDs)

-    testRunForget(t, gopts, snapshotIDs[0].String())
+    testRunForget(t, gopts, firstSnapshot[0].String())
     testRunPrune(t, gopts)
     testRunCheck(t, gopts)
 })
BIN src/cmds/restic/testdata/small-repo.tar.gz (vendored): binary file not shown.
@@ -1,6 +1,9 @@
 package backend

-import "restic"
+import (
+    "encoding/hex"
+    "restic"
+)

 // DefaultLayout implements the default layout for local and sftp backends, as
 // described in the Design document. The `data` directory has one level of

@@ -49,11 +52,18 @@ func (l *DefaultLayout) Filename(h restic.Handle) string {
     return l.Join(l.Dirname(h), name)
 }

-// Paths returns all directory names
+// Paths returns all directory names needed for a repo.
 func (l *DefaultLayout) Paths() (dirs []string) {
     for _, p := range defaultLayoutPaths {
         dirs = append(dirs, l.Join(l.Path, p))
     }

+    // also add subdirs
+    for i := 0; i < 256; i++ {
+        subdir := hex.EncodeToString([]byte{byte(i)})
+        dirs = append(dirs, l.Join(l.Path, defaultLayoutPaths[restic.DataFile], subdir))
+    }
+
     return dirs
 }
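Paths() spells the subdirectory names with hex.EncodeToString on a single byte, while the test below and the sftp backend use fmt.Sprintf("%02x", i). A quick standalone check that the two spellings agree for all 256 data subdirectories:

package main

import (
    "encoding/hex"
    "fmt"
)

func main() {
    for i := 0; i < 256; i++ {
        a := hex.EncodeToString([]byte{byte(i)}) // spelling used in Paths()
        b := fmt.Sprintf("%02x", i)              // spelling used in the test and sftp backend
        if a != b {
            panic("mismatch")
        }
    }
    fmt.Println("00 ..", hex.EncodeToString([]byte{255})) // 00 .. ff
}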
@@ -111,6 +111,10 @@ func TestDefaultLayout(t *testing.T) {
         filepath.Join(tempdir, "keys"),
     }

+    for i := 0; i < 256; i++ {
+        want = append(want, filepath.Join(tempdir, "data", fmt.Sprintf("%02x", i)))
+    }
+
     sort.Sort(sort.StringSlice(want))
     sort.Sort(sort.StringSlice(dirs))
@@ -35,6 +35,14 @@ func Open(cfg Config) (*Local, error) {

     be := &Local{Config: cfg, Layout: l}

+    // create paths for data and refs. MkdirAll does nothing if the directory already exists.
+    for _, d := range be.Paths() {
+        err := fs.MkdirAll(d, backend.Modes.Dir)
+        if err != nil {
+            return nil, errors.Wrap(err, "MkdirAll")
+        }
+    }
+
     return be, nil
 }
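The new comment leans on a documented property of the standard library's os.MkdirAll (which restic's fs.MkdirAll presumably wraps): it succeeds without error when the directory already exists, so running it on every Open is idempotent. A quick standalone check:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    dir := filepath.Join(os.TempDir(), "restic-demo", "data", "00")
    fmt.Println(os.MkdirAll(dir, 0700)) // <nil>: created
    fmt.Println(os.MkdirAll(dir, 0700)) // <nil>: already exists, still no error
}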
@@ -89,26 +97,8 @@ func (b *Local) Save(ctx context.Context, h restic.Handle, rd io.Reader) (err er

     filename := b.Filename(h)

-    // create directories if necessary, ignore errors
-    if h.Type == restic.DataFile {
-        err = fs.MkdirAll(filepath.Dir(filename), backend.Modes.Dir)
-        if err != nil {
-            return errors.Wrap(err, "MkdirAll")
-        }
-    }
-
     // create new file
     f, err := fs.OpenFile(filename, os.O_CREATE|os.O_EXCL|os.O_WRONLY, backend.Modes.File)
-    if os.IsNotExist(errors.Cause(err)) {
-        // create the locks dir, then try again
-        err = fs.MkdirAll(b.Dirname(h), backend.Modes.Dir)
-        if err != nil {
-            return errors.Wrap(err, "MkdirAll")
-        }
-
-        return b.Save(ctx, h, rd)
-    }
-
     if err != nil {
         return errors.Wrap(err, "OpenFile")
     }
@@ -4,6 +4,7 @@ import (
     "context"
     "fmt"
     "io"
+    "io/ioutil"
     "os"
     "path"
     "restic"
@@ -14,6 +15,7 @@
     "restic/errors"

     "github.com/minio/minio-go"
+    "github.com/minio/minio-go/pkg/credentials"

     "restic/debug"
 )
@@ -38,9 +40,24 @@ func open(cfg Config) (*Backend, error) {
         minio.MaxRetry = int(cfg.MaxRetries)
     }

-    client, err := minio.New(cfg.Endpoint, cfg.KeyID, cfg.Secret, !cfg.UseHTTP)
-    if err != nil {
-        return nil, errors.Wrap(err, "minio.New")
+    var client *minio.Client
+    var err error
+
+    if cfg.KeyID == "" || cfg.Secret == "" {
+        debug.Log("key/secret not found, trying to get them from IAM")
+        creds := credentials.NewIAM("")
+        client, err = minio.NewWithCredentials(cfg.Endpoint, creds, !cfg.UseHTTP, "")
+
+        if err != nil {
+            return nil, errors.Wrap(err, "minio.NewWithCredentials")
+        }
+    } else {
+        debug.Log("key/secret found")
+        client, err = minio.New(cfg.Endpoint, cfg.KeyID, cfg.Secret, !cfg.UseHTTP)
+
+        if err != nil {
+            return nil, errors.Wrap(err, "minio.New")
+        }
     }

     sem, err := backend.NewSemaphore(cfg.Connections)
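Condensed, the selection logic above is: explicit keys win, otherwise fall back to the IAM instance profile via the EC2 metadata service. A hedged standalone sketch using the same minio-go calls as the diff (assuming the minio-go revision pinned in vendor/manifest below; endpoint and keys are placeholders):

package main

import (
    "fmt"

    minio "github.com/minio/minio-go"
    "github.com/minio/minio-go/pkg/credentials"
)

// newClient mirrors the selection above: use static keys when both are set,
// otherwise let the credentials package poll the IAM instance profile.
func newClient(endpoint, keyID, secret string, useHTTP bool) (*minio.Client, error) {
    if keyID == "" || secret == "" {
        creds := credentials.NewIAM("") // "" means the default metadata endpoint
        return minio.NewWithCredentials(endpoint, creds, !useHTTP, "")
    }
    return minio.New(endpoint, keyID, secret, !useHTTP)
}

func main() {
    client, err := newClient("s3.amazonaws.com", "", "", false)
    fmt.Println(client != nil, err)
}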
@@ -76,6 +93,9 @@ func Open(cfg Config) (restic.Backend, error) {
 // it does not exist yet.
 func Create(cfg Config) (restic.Backend, error) {
     be, err := open(cfg)
+    if err != nil {
+        return nil, errors.Wrap(err, "open")
+    }
     found, err := be.client.BucketExists(cfg.Bucket)
     if err != nil {
         debug.Log("BucketExists(%v) returned err %v", cfg.Bucket, err)
@@ -178,53 +198,6 @@ func (be *Backend) Path() string {
     return be.cfg.Prefix
 }

-// nopCloserFile wraps *os.File and overwrites the Close() method with method
-// that does nothing. In addition, the method Len() is implemented, which
-// returns the size of the file (filesize - current offset).
-type nopCloserFile struct {
-    *os.File
-}
-
-func (f nopCloserFile) Close() error {
-    debug.Log("prevented Close()")
-    return nil
-}
-
-// Len returns the remaining length of the file (filesize - current offset).
-func (f nopCloserFile) Len() int {
-    debug.Log("Len() called")
-    fi, err := f.Stat()
-    if err != nil {
-        panic(err)
-    }
-
-    pos, err := f.Seek(0, io.SeekCurrent)
-    if err != nil {
-        panic(err)
-    }
-
-    size := fi.Size() - pos
-    debug.Log("returning file size %v", size)
-    return int(size)
-}
-
-type lenner interface {
-    Len() int
-    io.Reader
-}
-
-// nopCloserLenner wraps a lenner and overwrites the Close() method with method
-// that does nothing. In addition, the method Size() is implemented, which
-// returns the size of the file (filesize - current offset).
-type nopCloserLenner struct {
-    lenner
-}
-
-func (f *nopCloserLenner) Close() error {
-    debug.Log("prevented Close()")
-    return nil
-}
-
 // Save stores data in the backend at the handle.
 func (be *Backend) Save(ctx context.Context, h restic.Handle, rd io.Reader) (err error) {
     debug.Log("Save %v", h)

@@ -243,15 +216,7 @@ func (be *Backend) Save(ctx context.Context, h restic.Handle, rd io.Reader) (err
     }

     // prevent the HTTP client from closing a file
-    if f, ok := rd.(*os.File); ok {
-        debug.Log("reader is %#T, using nopCloserFile{}", rd)
-        rd = nopCloserFile{f}
-    } else if l, ok := rd.(lenner); ok {
-        debug.Log("reader is %#T, using nopCloserLenner{}", rd)
-        rd = nopCloserLenner{l}
-    } else {
-        debug.Log("reader is %#T, no specific workaround enabled", rd)
-    }
+    rd = ioutil.NopCloser(rd)

     be.sem.GetToken()
     debug.Log("PutObject(%v, %v)", be.cfg.Bucket, objName)
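ioutil.NopCloser replaces both hand-rolled wrapper types: it hides Close behind a plain io.ReadCloser so the HTTP layer cannot close restic's file. Note that, unlike the removed wrappers, it also hides Len() and the underlying *os.File, a trade-off this change appears to accept. A minimal illustration:

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "strings"
)

func main() {
    var rd io.Reader = strings.NewReader("payload")

    wrapped := ioutil.NopCloser(rd) // io.ReadCloser whose Close is a no-op
    if err := wrapped.Close(); err != nil {
        panic(err)
    }

    // The underlying reader is untouched and still readable after Close.
    b, _ := ioutil.ReadAll(wrapped)
    fmt.Println(string(b)) // payload
}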
@@ -453,10 +418,26 @@ func (be *Backend) Rename(h restic.Handle, l backend.Layout) error {
     oldname := be.Filename(h)
     newname := l.Filename(h)

+    if oldname == newname {
+        debug.Log("  %v is already renamed", newname)
+        return nil
+    }
+
     debug.Log("  %v -> %v", oldname, newname)

-    coreClient := minio.Core{Client: be.client}
-    err := coreClient.CopyObject(be.cfg.Bucket, newname, path.Join(be.cfg.Bucket, oldname), minio.CopyConditions{})
+    src := minio.NewSourceInfo(be.cfg.Bucket, oldname, nil)
+
+    dst, err := minio.NewDestinationInfo(be.cfg.Bucket, newname, nil, nil)
+    if err != nil {
+        return errors.Wrap(err, "NewDestinationInfo")
+    }
+
+    err = be.client.CopyObject(dst, src)
+    if err != nil && be.IsNotExist(err) {
+        debug.Log("copy failed: %v, seems to already have been renamed", err)
+        return nil
+    }
+
     if err != nil {
         debug.Log("copy failed: %v", err)
         return err
@@ -126,11 +126,56 @@ func Open(cfg Config) (*SFTP, error) {

     debug.Log("layout: %v\n", sftp.Layout)

+    if err := sftp.checkDataSubdirs(); err != nil {
+        debug.Log("checkDataSubdirs returned %v", err)
+        return nil, err
+    }
+
     sftp.Config = cfg
     sftp.p = cfg.Path
     return sftp, nil
 }

+func (r *SFTP) checkDataSubdirs() error {
+    datadir := r.Dirname(restic.Handle{Type: restic.DataFile})
+
+    // check if all paths for data/ exist
+    entries, err := r.c.ReadDir(datadir)
+    if err != nil {
+        return err
+    }
+
+    subdirs := make(map[string]struct{}, len(entries))
+    for _, entry := range entries {
+        subdirs[entry.Name()] = struct{}{}
+    }
+
+    for i := 0; i < 256; i++ {
+        subdir := fmt.Sprintf("%02x", i)
+        if _, ok := subdirs[subdir]; !ok {
+            debug.Log("subdir %v is missing, creating", subdir)
+            err := r.mkdirAll(path.Join(datadir, subdir), backend.Modes.Dir)
+            if err != nil {
+                return err
+            }
+        }
+    }
+
+    return nil
+}
+
+func (r *SFTP) mkdirAllDataSubdirs() error {
+    for _, d := range r.Paths() {
+        err := r.mkdirAll(d, backend.Modes.Dir)
+        debug.Log("mkdirAll %v -> %v", d, err)
+        if err != nil {
+            return err
+        }
+    }
+
+    return nil
+}
+
 // Join combines path components with slashes (according to the sftp spec).
 func (r *SFTP) Join(p ...string) string {
     return path.Join(p...)
@@ -202,12 +247,8 @@ func Create(cfg Config) (*SFTP, error) {
     }

     // create paths for data and refs
-    for _, d := range sftp.Paths() {
-        err = sftp.mkdirAll(d, backend.Modes.Dir)
-        debug.Log("mkdirAll %v -> %v", d, err)
-        if err != nil {
-            return nil, err
-        }
+    if err = sftp.mkdirAllDataSubdirs(); err != nil {
+        return nil, err
     }

     err = sftp.Close()
@@ -90,16 +90,6 @@ func Open(cfg Config) (restic.Backend, error) {
         return nil, errors.Wrap(err, "conn.Container")
     }

-    // check that the server supports byte ranges
-    _, hdr, err := be.conn.Account()
-    if err != nil {
-        return nil, errors.Wrap(err, "Account()")
-    }
-
-    if hdr["Accept-Ranges"] != "bytes" {
-        return nil, errors.New("backend does not support byte range")
-    }
-
     return be, nil
 }
@@ -20,8 +20,8 @@ func newSwiftTestSuite(t testing.TB) *test.Suite {
         // do not use excessive data
         MinimalData: true,

-        // wait for removals for at least 20s
-        WaitForDelayedRemoval: 20 * time.Second,
+        // wait for removals for at least 60s
+        WaitForDelayedRemoval: 60 * time.Second,

         // NewConfig returns a config for a new temporary backend that will be used in tests.
         NewConfig: func() (interface{}, error) {
@@ -330,7 +330,7 @@ func (s *Suite) TestSave(t *testing.T) {
             t.Fatal(err)
         }

-        err = delayedRemove(t, b, h, s.WaitForDelayedRemoval)
+        err = delayedRemove(t, b, s.WaitForDelayedRemoval, h)
         if err != nil {
             t.Fatalf("error removing item: %+v", err)
         }
@@ -435,28 +435,39 @@ func testLoad(b restic.Backend, h restic.Handle, length int, offset int64) error
     return err
 }

-func delayedRemove(t testing.TB, be restic.Backend, h restic.Handle, maxwait time.Duration) error {
+func delayedRemove(t testing.TB, be restic.Backend, maxwait time.Duration, handles ...restic.Handle) error {
     // Some backend (swift, I'm looking at you) may implement delayed
     // removal of data. Let's wait a bit if this happens.
-    err := be.Remove(context.TODO(), h)
-    if err != nil {
-        return err
-    }
-
-    start := time.Now()
-    attempt := 0
-    for time.Since(start) <= maxwait {
-        found, err := be.Test(context.TODO(), h)
+    for _, h := range handles {
+        err := be.Remove(context.TODO(), h)
         if err != nil {
             return err
         }
+    }

-        if !found {
-            break
+    for _, h := range handles {
+        start := time.Now()
+        attempt := 0
+        var found bool
+        var err error
+        for time.Since(start) <= maxwait {
+            found, err = be.Test(context.TODO(), h)
+            if err != nil {
+                return err
+            }
+
+            if !found {
+                break
+            }
+
+            time.Sleep(2 * time.Second)
+            attempt++
         }

-        time.Sleep(500 * time.Millisecond)
-        attempt++
+        if found {
+            t.Fatalf("removed blob %v still present after %v (%d attempts)", h, time.Since(start), attempt)
+        }
     }

     return nil
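The refactored delayedRemove takes the timeout before a variadic tail of handles and works in two phases: every handle is removed first, and only then is each one polled until it disappears or maxwait elapses. The variadic shape is what lets the cleanup code below pass a whole batch at once. A toy illustration of the calling conventions (hypothetical handle type):

package main

import "fmt"

type handle struct{ name string }

// removeAll mimics the refactored shape: fixed arguments first,
// then a variadic tail of handles.
func removeAll(maxwait int, handles ...handle) {
    fmt.Println(maxwait, len(handles))
}

func main() {
    h1, h2 := handle{"a"}, handle{"b"}
    removeAll(60, h1)     // single handle, as in TestSave
    removeAll(60, h1, h2) // several handles at once

    hs := []handle{h1, h2}
    removeAll(60, hs...) // spread a slice, as in the TestBackend cleanup below
}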
@@ -552,7 +563,7 @@ func (s *Suite) TestBackend(t *testing.T) {
         test.Assert(t, err != nil, "expected error for %v, got %v", h, err)

         // remove and recreate
-        err = delayedRemove(t, b, h, s.WaitForDelayedRemoval)
+        err = delayedRemove(t, b, s.WaitForDelayedRemoval, h)
         test.OK(t, err)

         // test that the blob is gone
@@ -587,6 +598,7 @@ func (s *Suite) TestBackend(t *testing.T) {

     // remove content if requested
     if test.TestCleanupTempDirs {
+        var handles []restic.Handle
         for _, ts := range testStrings {
             id, err := restic.ParseID(ts.id)
             test.OK(t, err)

@@ -597,12 +609,10 @@ func (s *Suite) TestBackend(t *testing.T) {
             test.OK(t, err)
             test.Assert(t, found, fmt.Sprintf("id %q not found", id))

-            test.OK(t, delayedRemove(t, b, h, s.WaitForDelayedRemoval))
-
-            found, err = b.Test(context.TODO(), h)
-            test.OK(t, err)
-            test.Assert(t, !found, fmt.Sprintf("id %q found after removal", id))
+            handles = append(handles, h)
         }
+
+        test.OK(t, delayedRemove(t, b, s.WaitForDelayedRemoval, handles...))
     }
 }
@@ -19,11 +19,6 @@ type File interface {
     Stat() (os.FileInfo, error)
 }

-// Chmod changes the mode of the named file to mode.
-func Chmod(name string, mode os.FileMode) error {
-    return os.Chmod(fixpath(name), mode)
-}
-
 // Mkdir creates a new directory with the specified name and permission bits.
 // If there is an error, it will be of type *PathError.
 func Mkdir(name string, perm os.FileMode) error {
@@ -5,6 +5,7 @@ package fs
 import (
     "io/ioutil"
     "os"
+    "syscall"
 )

 // fixpath returns an absolute path on windows, so restic can open long file
@@ -35,3 +36,23 @@ func TempFile(dir, prefix string) (f *os.File, err error) {

     return f, nil
 }
+
+// isNotSupported returns true if the error is caused by an unsupported file system feature.
+func isNotSupported(err error) bool {
+    if perr, ok := err.(*os.PathError); ok && perr.Err == syscall.ENOTSUP {
+        return true
+    }
+    return false
+}
+
+// Chmod changes the mode of the named file to mode.
+func Chmod(name string, mode os.FileMode) error {
+    err := os.Chmod(fixpath(name), mode)
+
+    // ignore the error if the FS does not support setting this mode (e.g. CIFS with gvfs on Linux)
+    if err != nil && isNotSupported(err) {
+        return nil
+    }
+
+    return err
+}
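isNotSupported matches only a *os.PathError whose Err is syscall.ENOTSUP, so permission errors and unrelated failures still surface. A small standalone check of the three interesting cases, with the same logic copied out of the fs package:

package main

import (
    "errors"
    "fmt"
    "os"
    "syscall"
)

func isNotSupported(err error) bool {
    if perr, ok := err.(*os.PathError); ok && perr.Err == syscall.ENOTSUP {
        return true
    }
    return false
}

func main() {
    // chmod on a filesystem without mode support (e.g. CIFS via gvfs): ignored
    fmt.Println(isNotSupported(&os.PathError{Op: "chmod", Path: "/mnt/smb/f", Err: syscall.ENOTSUP})) // true
    // a real permission problem still propagates
    fmt.Println(isNotSupported(&os.PathError{Op: "chmod", Path: "/tmp/f", Err: syscall.EPERM})) // false
    // non-PathError values propagate too
    fmt.Println(isNotSupported(errors.New("some other error"))) // false
}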
@@ -91,3 +91,8 @@ func MkdirAll(path string, perm os.FileMode) error {
 func TempFile(dir, prefix string) (f *os.File, err error) {
     return ioutil.TempFile(dir, prefix)
 }
+
+// Chmod changes the mode of the named file to mode.
+func Chmod(name string, mode os.FileMode) error {
+    return os.Chmod(fixpath(name), mode)
+}
@@ -16,7 +16,7 @@ import (
 type Config struct {
     OwnerIsRoot bool
     Host        string
-    Tags        []string
+    Tags        []restic.TagList
     Paths       []string
 }
@@ -328,7 +328,7 @@ func TestIndexAddRemovePack(t *testing.T) {
     }
 }

-// example index serialization from doc/Design.md
+// example index serialization from doc/Design.rst
 var docExample = []byte(`
 {
   "supersedes": [
@@ -2,6 +2,8 @@ package migrations

 import (
     "context"
+    "fmt"
+    "os"
     "path"
     "restic"
     "restic/backend"
@@ -34,13 +36,35 @@ func (m *S3Layout) Check(ctx context.Context, repo restic.Repository) (bool, err
     return true, nil
 }

+func retry(max int, fail func(err error), f func() error) error {
+    var err error
+    for i := 0; i < max; i++ {
+        err = f()
+        if err == nil {
+            return err
+        }
+        if fail != nil {
+            fail(err)
+        }
+    }
+    return err
+}
+
+// maxErrors for retrying renames on s3.
+const maxErrors = 20
+
 func (m *S3Layout) moveFiles(ctx context.Context, be *s3.Backend, l backend.Layout, t restic.FileType) error {
+    printErr := func(err error) {
+        fmt.Fprintf(os.Stderr, "renaming file returned error: %v\n", err)
+    }
+
     for name := range be.List(ctx, t) {
         h := restic.Handle{Type: t, Name: name}
         debug.Log("move %v", h)
-        if err := be.Rename(h, l); err != nil {
-            return err
-        }
+
+        retry(maxErrors, printErr, func() error {
+            return be.Rename(h, l)
+        })
     }

     return nil
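retry runs the operation up to max times, reporting each failure through the callback and returning the last error if none of the attempts succeed. A standalone usage sketch with a deliberately flaky operation (the retry function is copied verbatim from the diff above):

package main

import (
    "errors"
    "fmt"
    "os"
)

func retry(max int, fail func(err error), f func() error) error {
    var err error
    for i := 0; i < max; i++ {
        err = f()
        if err == nil {
            return err
        }
        if fail != nil {
            fail(err)
        }
    }
    return err
}

func main() {
    attempts := 0
    err := retry(20, func(err error) {
        fmt.Fprintf(os.Stderr, "renaming file returned error: %v\n", err)
    }, func() error {
        attempts++
        if attempts < 3 {
            return errors.New("transient failure")
        }
        return nil // succeeds on the third try
    })
    fmt.Println(err, attempts) // <nil> 3
}

Note that the call site in moveFiles discards retry's return value, so a rename that still fails after maxErrors attempts is only reported through printErr rather than aborting the migration.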
@@ -54,15 +78,22 @@ func (m *S3Layout) Apply(ctx context.Context, repo restic.Repository) error {
         return errors.New("backend is not s3")
     }

+    oldLayout := &backend.S3LegacyLayout{
+        Path: be.Path(),
+        Join: path.Join,
+    }
+
     newLayout := &backend.DefaultLayout{
         Path: be.Path(),
         Join: path.Join,
     }

+    be.Layout = oldLayout
+
     for _, t := range []restic.FileType{
-        restic.KeyFile,
         restic.SnapshotFile,
         restic.DataFile,
+        restic.KeyFile,
         restic.LockFile,
     } {
         err := m.moveFiles(ctx, be, newLayout, t)
@@ -1,4 +1,4 @@
-// +build !windows
+// +build !windows,!darwin

 package restic
src/restic/progress_unix_with_siginfo.go | 23 (new file)
@@ -0,0 +1,23 @@
// +build darwin

package restic

import (
    "os"
    "os/signal"
    "syscall"

    "restic/debug"
)

func init() {
    c := make(chan os.Signal)
    signal.Notify(c, syscall.SIGUSR1)
    signal.Notify(c, syscall.SIGINFO)
    go func() {
        for s := range c {
            debug.Log("Signal received: %v\n", s)
            forceUpdateProgress <- true
        }
    }()
}
@@ -195,7 +195,7 @@ func TestIndexSize(t *testing.T) {
     t.Logf("Index file size for %d blobs in %d packs is %d", blobs*packs, packs, wr.Len())
 }

-// example index serialization from doc/Design.md
+// example index serialization from doc/Design.rst
 var docExample = []byte(`
 {
   "supersedes": [
@@ -5,6 +5,7 @@ import (
     "fmt"
     "os/user"
     "path/filepath"
+    "restic/debug"
     "time"
 )
@@ -130,46 +131,65 @@ func (sn *Snapshot) RemoveTags(removeTags []string) (changed bool) {
     return
 }

-// HasTags returns true if the snapshot has at least all of tags.
-func (sn *Snapshot) HasTags(tags []string) bool {
-nextTag:
-    for _, tag := range tags {
-        for _, snTag := range sn.Tags {
-            if tag == snTag {
-                continue nextTag
-            }
+func (sn *Snapshot) hasTag(tag string) bool {
+    for _, snTag := range sn.Tags {
+        if tag == snTag {
+            return true
         }
+    }
+    return false
+}

-        return false
+// HasTags returns true if the snapshot has all the tags in l.
+func (sn *Snapshot) HasTags(l []string) bool {
+    for _, tag := range l {
+        if !sn.hasTag(tag) {
+            return false
+        }
     }

     return true
 }

-// HasPaths returns true if the snapshot has at least all of paths.
+// HasTagList returns true if the snapshot satisfies at least one TagList,
+// so there is a TagList in l for which all tags are included in sn.
+func (sn *Snapshot) HasTagList(l []TagList) bool {
+    debug.Log("testing snapshot with tags %v against list: %v", sn.Tags, l)
+
+    if len(l) == 0 {
+        return true
+    }
+
+    for _, tags := range l {
+        if sn.HasTags(tags) {
+            debug.Log("  snapshot satisfies %v", tags, l)
+            return true
+        }
+    }
+
+    return false
+}
+
+func (sn *Snapshot) hasPath(path string) bool {
+    for _, snPath := range sn.Paths {
+        if path == snPath {
+            return true
+        }
+    }
+    return false
+}
+
+// HasPaths returns true if the snapshot has all of the paths.
 func (sn *Snapshot) HasPaths(paths []string) bool {
-nextPath:
     for _, path := range paths {
-        for _, snPath := range sn.Paths {
-            if path == snPath {
-                continue nextPath
-            }
+        if !sn.hasPath(path) {
+            return false
         }
-
-        return false
     }

     return true
 }

 // SamePaths returns true if the snapshot matches the entire paths set
 func (sn *Snapshot) SamePaths(paths []string) bool {
     if len(sn.Paths) != len(paths) {
         return false
     }
     return sn.HasPaths(paths)
 }

 // Snapshots is a list of snapshots.
 type Snapshots []*Snapshot
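HasTagList is what gives the --tag option its OR-of-ANDs semantics: each TagList is a conjunction, and the lists in the slice are alternatives. A standalone sketch against simplified stand-in types (not the full restic.Snapshot), using the same logic as the methods above:

package main

import "fmt"

type TagList []string

type Snapshot struct{ Tags []string }

func (sn *Snapshot) hasTag(tag string) bool {
    for _, t := range sn.Tags {
        if t == tag {
            return true
        }
    }
    return false
}

// HasTags reports whether sn carries every tag in l (AND).
func (sn *Snapshot) HasTags(l []string) bool {
    for _, tag := range l {
        if !sn.hasTag(tag) {
            return false
        }
    }
    return true
}

// HasTagList reports whether sn satisfies at least one list (OR over ANDs).
func (sn *Snapshot) HasTagList(l []TagList) bool {
    if len(l) == 0 {
        return true
    }
    for _, tags := range l {
        if sn.HasTags(tags) {
            return true
        }
    }
    return false
}

func main() {
    sn := &Snapshot{Tags: []string{"foo"}}
    fmt.Println(sn.HasTagList([]TagList{{"foo", "bar"}}))   // false: needs foo AND bar
    fmt.Println(sn.HasTagList([]TagList{{"foo"}, {"bar"}})) // true: foo OR bar suffices
}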
@@ -12,7 +12,7 @@ import (
 var ErrNoSnapshotFound = errors.New("no snapshot found")

 // FindLatestSnapshot finds latest snapshot with optional target/directory, tags and hostname filters.
-func FindLatestSnapshot(ctx context.Context, repo Repository, targets []string, tags []string, hostname string) (ID, error) {
+func FindLatestSnapshot(ctx context.Context, repo Repository, targets []string, tagLists []TagList, hostname string) (ID, error) {
     var (
         latest   time.Time
         latestID ID
@@ -24,11 +24,21 @@ func FindLatestSnapshot(ctx context.Context, repo Repository, targets []string,
         if err != nil {
             return ID{}, errors.Errorf("Error listing snapshot: %v", err)
         }
-        if snapshot.Time.After(latest) && (hostname == "" || hostname == snapshot.Hostname) && snapshot.HasTags(tags) && snapshot.HasPaths(targets) {
-            latest = snapshot.Time
-            latestID = snapshotID
-            found = true
+        if snapshot.Time.Before(latest) || (hostname != "" && hostname != snapshot.Hostname) {
+            continue
         }
+
+        if !snapshot.HasTagList(tagLists) {
+            continue
+        }
+
+        if !snapshot.HasPaths(targets) {
+            continue
+        }
+
+        latest = snapshot.Time
+        latestID = snapshotID
+        found = true
     }

     if !found {
@@ -53,7 +63,7 @@ func FindSnapshot(repo Repository, s string) (ID, error) {

 // FindFilteredSnapshots yields Snapshots filtered from the list of all
 // snapshots.
-func FindFilteredSnapshots(ctx context.Context, repo Repository, host string, tags []string, paths []string) Snapshots {
+func FindFilteredSnapshots(ctx context.Context, repo Repository, host string, tags []TagList, paths []string) Snapshots {
     results := make(Snapshots, 0, 20)

     for id := range repo.List(ctx, SnapshotFile) {

@@ -62,7 +72,7 @@ func FindFilteredSnapshots(ctx context.Context, repo Repository, host string, ta
             fmt.Fprintf(os.Stderr, "could not load snapshot %v: %v\n", id.Str(), err)
             continue
         }
-        if (host != "" && host != sn.Hostname) || !sn.HasTags(tags) || !sn.HasPaths(paths) {
+        if (host != "" && host != sn.Hostname) || !sn.HasTagList(tags) || !sn.HasPaths(paths) {
             continue
         }
@@ -8,13 +8,13 @@ import (

 // ExpirePolicy configures which snapshots should be automatically removed.
 type ExpirePolicy struct {
-    Last    int      // keep the last n snapshots
-    Hourly  int      // keep the last n hourly snapshots
-    Daily   int      // keep the last n daily snapshots
-    Weekly  int      // keep the last n weekly snapshots
-    Monthly int      // keep the last n monthly snapshots
-    Yearly  int      // keep the last n yearly snapshots
-    Tags    []string // keep all snapshots with these tags
+    Last    int       // keep the last n snapshots
+    Hourly  int       // keep the last n hourly snapshots
+    Daily   int       // keep the last n daily snapshots
+    Weekly  int       // keep the last n weekly snapshots
+    Monthly int       // keep the last n monthly snapshots
+    Yearly  int       // keep the last n yearly snapshots
+    Tags    []TagList // keep all snapshots that include at least one of the tag lists.
 }

 // Sum returns the maximum number of snapshots to be kept according to this
@@ -94,11 +94,12 @@ func ApplyPolicy(list Snapshots, p ExpirePolicy) (keep, remove Snapshots) {
         var keepSnap bool

         // Tags are handled specially as they are not counted.
-        if len(p.Tags) > 0 {
-            if cur.HasTags(p.Tags) {
+        for _, l := range p.Tags {
+            if cur.HasTags(l) {
                 keepSnap = true
             }
         }

         // Now update the other buckets and see if they have some counts left.
         for i, b := range buckets {
             if b.Count > 0 {
@@ -27,7 +27,7 @@ func TestExpireSnapshotOps(t *testing.T) {
         p *restic.ExpirePolicy
     }{
         {true, 0, &restic.ExpirePolicy{}},
-        {true, 0, &restic.ExpirePolicy{Tags: []string{}}},
+        {true, 0, &restic.ExpirePolicy{Tags: []restic.TagList{}}},
         {false, 22, &restic.ExpirePolicy{Daily: 7, Weekly: 2, Monthly: 3, Yearly: 10}},
     }
     for i, d := range data {
@@ -163,8 +163,9 @@ var expireTests = []restic.ExpirePolicy{
     {Daily: 2, Weekly: 2, Monthly: 6},
     {Yearly: 10},
     {Daily: 7, Weekly: 2, Monthly: 3, Yearly: 10},
-    {Tags: []string{"foo"}},
-    {Tags: []string{"foo", "bar"}},
+    {Tags: []restic.TagList{{"foo"}}},
+    {Tags: []restic.TagList{{"foo", "bar"}}},
+    {Tags: []restic.TagList{{"foo"}, {"bar"}}},
 }

 func TestApplyPolicy(t *testing.T) {
src/restic/tag_list.go | 62 (new file)
@@ -0,0 +1,62 @@
package restic

import (
    "fmt"
    "strings"
)

// TagList is a list of tags.
type TagList []string

// splitTagList splits a string into a list of tags. The tags in the string
// need to be separated by commas. Whitespace is stripped around the individual
// tags.
func splitTagList(s string) (l TagList) {
    for _, t := range strings.Split(s, ",") {
        l = append(l, strings.TrimSpace(t))
    }
    return l
}

func (l TagList) String() string {
    return "[" + strings.Join(l, ", ") + "]"
}

// Set updates the TagList's value.
func (l *TagList) Set(s string) error {
    *l = splitTagList(s)
    return nil
}

// Type returns a description of the type.
func (TagList) Type() string {
    return "TagList"
}

// TagLists consists of several TagList.
type TagLists []TagList

// splitTagLists splits a slice of strings into a slice of TagLists using
// splitTagList.
func splitTagLists(s []string) (l TagLists) {
    l = make([]TagList, 0, len(s))
    for _, t := range s {
        l = append(l, splitTagList(t))
    }
    return l
}

func (l TagLists) String() string {
    return fmt.Sprintf("%v", []TagList(l))
}

// Set appends one comma-split TagList per call.
func (l *TagLists) Set(s string) error {
    *l = append(*l, splitTagList(s))
    return nil
}

// Type returns a description of the type.
func (TagLists) Type() string {
    return "TagLists"
}
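Because TagLists implements Set, String and Type, it satisfies pflag's Value interface, which is what the f.Var(&opts.Tags, "tag", ...) calls in the command files rely on: each --tag occurrence appends one comma-split TagList. A standalone sketch reusing the logic above (assuming github.com/spf13/pflag; the flag set name is made up):

package main

import (
    "fmt"
    "strings"

    "github.com/spf13/pflag"
)

type TagList []string

type TagLists []TagList

func splitTagList(s string) (l TagList) {
    for _, t := range strings.Split(s, ",") {
        l = append(l, strings.TrimSpace(t))
    }
    return l
}

func (l TagLists) String() string { return fmt.Sprintf("%v", []TagList(l)) }

// Set appends one comma-separated list per flag occurrence.
func (l *TagLists) Set(s string) error {
    *l = append(*l, splitTagList(s))
    return nil
}

func (TagLists) Type() string { return "TagLists" }

func main() {
    fs := pflag.NewFlagSet("demo", pflag.ContinueOnError)
    var tags TagLists
    fs.Var(&tags, "tag", "taglist filter")

    fs.Parse([]string{"--tag", "foo,bar", "--tag", "baz"})
    fmt.Println(tags) // [[foo bar] [baz]]: (foo AND bar) OR baz
}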
src/restic/testdata/policy_keep_snapshots_20 (vendored) | 131 (new file)
@@ -0,0 +1,131 @@
[
  {
    "time": "2014-11-15T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo",
      "bar"
    ]
  },
  {
    "time": "2014-11-13T10:20:30.1Z",
    "tree": null,
    "paths": null,
    "tags": [
      "bar"
    ]
  },
  {
    "time": "2014-11-13T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-11-12T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-11-10T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-11-08T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-22T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-20T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-11T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-10T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-09T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-08T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-06T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-05T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-02T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  },
  {
    "time": "2014-10-01T10:20:30Z",
    "tree": null,
    "paths": null,
    "tags": [
      "foo"
    ]
  }
]
52
vendor/manifest
vendored
@@ -10,13 +10,13 @@
	{
		"importpath": "github.com/elithrar/simple-scrypt",
		"repository": "https://github.com/elithrar/simple-scrypt",
		"revision": "2325946f714c95de4a6088202c402fbdfa64163b",
		"revision": "6724715de445c2e70cdafb7a1a14c8cfe0984210",
		"branch": "master"
	},
	{
		"importpath": "github.com/go-ini/ini",
		"repository": "https://github.com/go-ini/ini",
		"revision": "e7fea39b01aea8d5671f6858f0532f56e8bff3a5",
		"revision": "3d73f4b845efdf9989fffd4b4e562727744a34ba",
		"branch": "master"
	},
	{
@@ -34,122 +34,122 @@
	{
		"importpath": "github.com/kurin/blazer",
		"repository": "https://github.com/kurin/blazer",
		"revision": "d1b9d31c8641e46f2651fe564ee9ddb857c1ed29",
		"revision": "612082ed2430716569f1ec816fc6ade849020816",
		"branch": "master"
	},
	{
		"importpath": "github.com/minio/go-homedir",
		"repository": "https://github.com/minio/go-homedir",
		"revision": "0b1069c753c94b3633cc06a1995252dbcc27c7a6",
		"revision": "21304a94172ae3a09dee2cd86a12fb6f842138c7",
		"branch": "master"
	},
	{
		"importpath": "github.com/minio/minio-go",
		"repository": "https://github.com/minio/minio-go",
		"revision": "fe53a65ebc43b5d22626b29a19a3de81170e42d3",
		"branch": "master"
		"revision": "5ca66c9a35ba1cd674484be99dc97aa0973afe12",
		"branch": "HEAD"
	},
	{
		"importpath": "github.com/ncw/swift",
		"repository": "https://github.com/ncw/swift",
		"revision": "bf51ccd3b5c3a1f12ac762b4511c5f9f1ce6b26f",
		"revision": "9e6fdb8957a022d5780a78b58d6181c3580bb01f",
		"branch": "master"
	},
	{
		"importpath": "github.com/pkg/errors",
		"repository": "https://github.com/pkg/errors",
		"revision": "645ef00459ed84a119197bfb8d8205042c6df63d",
		"branch": "HEAD"
		"revision": "c605e284fe17294bda444b34710735b29d1a9d90",
		"branch": "master"
	},
	{
		"importpath": "github.com/pkg/profile",
		"repository": "https://github.com/pkg/profile",
		"revision": "1c16f117a3ab788fdf0e334e623b8bccf5679866",
		"branch": "HEAD"
		"revision": "5b67d428864e92711fcbd2f8629456121a56d91f",
		"branch": "master"
	},
	{
		"importpath": "github.com/pkg/sftp",
		"repository": "https://github.com/pkg/sftp",
		"revision": "8197a2e580736b78d704be0fc47b2324c0591a32",
		"revision": "314a5ccb89b21e053d7d96c3a706eacaf2b18231",
		"branch": "master"
	},
	{
		"importpath": "github.com/pkg/xattr",
		"repository": "https://github.com/pkg/xattr",
		"revision": "858d49c224b241ba9393e20f521f6a76f52dd482",
		"branch": "HEAD"
		"revision": "2c7218aab2e9980561010ef420b53d948749deaf",
		"branch": "master"
	},
	{
		"importpath": "github.com/restic/chunker",
		"repository": "https://github.com/restic/chunker",
		"revision": "bb2ecf9a98e35a0b336ffc23fc515fb6e7961577",
		"branch": "HEAD"
		"revision": "1542d55ca53d2d8d7b38e890f7a4be90014356af",
		"branch": "master"
	},
	{
		"importpath": "github.com/spf13/cobra",
		"repository": "https://github.com/spf13/cobra",
		"revision": "10f6b9d7e1631a54ad07c5c0fb71c28a1abfd3c2",
		"revision": "d994347edadc56d6a7f863775fb6887606685ae6",
		"branch": "master"
	},
	{
		"importpath": "github.com/spf13/pflag",
		"repository": "https://github.com/spf13/pflag",
		"revision": "2300d0f8576fe575f71aaa5b9bbe4e1b0dc2eb51",
		"revision": "e57e3eeb33f795204c1ca35f56c44f83227c6e66",
		"branch": "master"
	},
	{
		"importpath": "golang.org/x/crypto/curve25519",
		"repository": "https://go.googlesource.com/crypto",
		"revision": "efac7f277b17c19894091e358c6130cb6bd51117",
		"revision": "7f7c0c2d75ebb4e32a21396ce36e87b6dadc91c9",
		"branch": "master",
		"path": "/curve25519"
	},
	{
		"importpath": "golang.org/x/crypto/ed25519",
		"repository": "https://go.googlesource.com/crypto",
		"revision": "efac7f277b17c19894091e358c6130cb6bd51117",
		"revision": "7f7c0c2d75ebb4e32a21396ce36e87b6dadc91c9",
		"branch": "master",
		"path": "/ed25519"
	},
	{
		"importpath": "golang.org/x/crypto/pbkdf2",
		"repository": "https://go.googlesource.com/crypto",
		"revision": "efac7f277b17c19894091e358c6130cb6bd51117",
		"revision": "7f7c0c2d75ebb4e32a21396ce36e87b6dadc91c9",
		"branch": "master",
		"path": "/pbkdf2"
	},
	{
		"importpath": "golang.org/x/crypto/poly1305",
		"repository": "https://go.googlesource.com/crypto",
		"revision": "efac7f277b17c19894091e358c6130cb6bd51117",
		"revision": "7f7c0c2d75ebb4e32a21396ce36e87b6dadc91c9",
		"branch": "master",
		"path": "/poly1305"
	},
	{
		"importpath": "golang.org/x/crypto/scrypt",
		"repository": "https://go.googlesource.com/crypto",
		"revision": "efac7f277b17c19894091e358c6130cb6bd51117",
		"revision": "7f7c0c2d75ebb4e32a21396ce36e87b6dadc91c9",
		"branch": "master",
		"path": "/scrypt"
	},
	{
		"importpath": "golang.org/x/crypto/ssh",
		"repository": "https://go.googlesource.com/crypto",
		"revision": "efac7f277b17c19894091e358c6130cb6bd51117",
		"revision": "7f7c0c2d75ebb4e32a21396ce36e87b6dadc91c9",
		"branch": "master",
		"path": "/ssh"
	},
	{
		"importpath": "golang.org/x/net/context",
		"repository": "https://go.googlesource.com/net",
		"revision": "5602c733f70afc6dcec6766be0d5034d4c4f14de",
		"revision": "b3756b4b77d7b13260a0a2ec658753cf48922eac",
		"branch": "master",
		"path": "/context"
	},
	{
		"importpath": "golang.org/x/sys/unix",
		"repository": "https://go.googlesource.com/sys",
		"revision": "f3918c30c5c2cb527c0b071a27c35120a6c0719a",
		"revision": "4cd6d1a821c7175768725b55ca82f14683a29ea4",
		"branch": "master",
		"path": "/unix"
	}
80
vendor/src/github.com/elithrar/simple-scrypt/compositor.json
vendored
Normal file
File diff suppressed because one or more lines are too long
10
vendor/src/github.com/go-ini/ini/README.md
vendored
@@ -83,8 +83,8 @@ sec1, err := cfg.GetSection("Section")
sec2, err := cfg.GetSection("SecTIOn")

// key1 and key2 are the exactly same key object
key1, err := cfg.GetKey("Key")
key2, err := cfg.GetKey("KeY")
key1, err := sec1.GetKey("Key")
key2, err := sec2.GetKey("KeY")
```

#### MySQL-like boolean key
@@ -122,6 +122,12 @@ Take care that following format will be treated as comment:

If you want to save a value with `#` or `;`, please quote them with ``` ` ``` or ``` """ ```.

Alternatively, you can use following `LoadOptions` to completely ignore inline comments:

```go
cfg, err := LoadSources(LoadOptions{IgnoreInlineComment: true}, "app.ini"))
```

### Working with sections

To get a section, you would need to:
10
vendor/src/github.com/go-ini/ini/README_ZH.md
vendored
@@ -76,8 +76,8 @@ sec1, err := cfg.GetSection("Section")
sec2, err := cfg.GetSection("SecTIOn")

// key1 and key2 point to the same key object
key1, err := cfg.GetKey("Key")
key2, err := cfg.GetKey("KeY")
key1, err := sec1.GetKey("Key")
key2, err := sec2.GetKey("KeY")
```

#### MySQL-like boolean keys
@@ -115,6 +115,12 @@ key, err := sec.NewBooleanKey("skip-host-cache")

If you want to use values containing `#` or `;`, wrap them with ``` ` ``` or ``` """ ```.

Alternatively, you can completely ignore inline comments via `LoadOptions`:

```go
cfg, err := LoadSources(LoadOptions{IgnoreInlineComment: true}, "app.ini"))
```

### Working with sections

To get a section:
15
vendor/src/github.com/go-ini/ini/ini.go
vendored
@@ -37,7 +37,7 @@ const (

	// Maximum allowed depth when recursively substituing variable names.
	_DEPTH_VALUES = 99
	_VERSION = "1.27.0"
	_VERSION = "1.28.1"
)

// Version returns current package version literal.
@@ -60,6 +60,9 @@ var (

	// Explicitly write DEFAULT section header
	DefaultHeader = false

	// Indicate whether to put a line between sections
	PrettySection = true
)

func init() {
@@ -504,7 +507,7 @@ func (f *File) WriteToIndent(w io.Writer, indent string) (n int64, err error) {
		// In case key value contains "\n", "`", "\"", "#" or ";"
		if strings.ContainsAny(val, "\n`") {
			val = `"""` + val + `"""`
		} else if strings.ContainsAny(val, "#;") {
		} else if !f.options.IgnoreInlineComment && strings.ContainsAny(val, "#;") {
			val = "`" + val + "`"
		}
		if _, err = buf.WriteString(equalSign + val + LineBreak); err != nil {
@@ -513,9 +516,11 @@ func (f *File) WriteToIndent(w io.Writer, indent string) (n int64, err error) {
			}
		}

		// Put a line between sections
		if _, err = buf.WriteString(LineBreak); err != nil {
			return 0, err
		if PrettySection {
			// Put a line between sections
			if _, err = buf.WriteString(LineBreak); err != nil {
				return 0, err
			}
		}
	}
7
vendor/src/github.com/go-ini/ini/ini_test.go
vendored
@@ -215,6 +215,13 @@ key2=value #comment2`))

		So(cfg.Section("").Key("key1").String(), ShouldEqual, `value ;comment`)
		So(cfg.Section("").Key("key2").String(), ShouldEqual, `value #comment2`)

		var buf bytes.Buffer
		cfg.WriteTo(&buf)
		So(buf.String(), ShouldEqual, `key1 = value ;comment
key2 = value #comment2

`)
	})

	Convey("Load with boolean type keys", t, func() {
2
vendor/src/github.com/go-ini/ini/parser.go
vendored
@@ -189,7 +189,7 @@ func (p *parser) readContinuationLines(val string) (string, error) {
// are quotes \" or \'.
// It returns false if any other parts also contain same kind of quotes.
func hasSurroundedQuote(in string, quote byte) bool {
	return len(in) > 2 && in[0] == quote && in[len(in)-1] == quote &&
	return len(in) >= 2 && in[0] == quote && in[len(in)-1] == quote &&
		strings.IndexByte(in[1:], quote) == len(in)-2
}
92
vendor/src/github.com/go-ini/ini/struct.go
vendored
@@ -78,7 +78,7 @@ func parseDelim(actual string) string {
var reflectTime = reflect.TypeOf(time.Now()).Kind()

// setSliceWithProperType sets proper values to slice based on its type.
func setSliceWithProperType(key *Key, field reflect.Value, delim string, allowShadow bool) error {
func setSliceWithProperType(key *Key, field reflect.Value, delim string, allowShadow, isStrict bool) error {
	var strs []string
	if allowShadow {
		strs = key.StringsWithShadows(delim)
@@ -92,26 +92,30 @@ func setSliceWithProperType(key *Key, field reflect.Value, delim string, allowSh
	}

	var vals interface{}
	var err error

	sliceOf := field.Type().Elem().Kind()
	switch sliceOf {
	case reflect.String:
		vals = strs
	case reflect.Int:
		vals, _ = key.parseInts(strs, true, false)
		vals, err = key.parseInts(strs, true, false)
	case reflect.Int64:
		vals, _ = key.parseInt64s(strs, true, false)
		vals, err = key.parseInt64s(strs, true, false)
	case reflect.Uint:
		vals, _ = key.parseUints(strs, true, false)
		vals, err = key.parseUints(strs, true, false)
	case reflect.Uint64:
		vals, _ = key.parseUint64s(strs, true, false)
		vals, err = key.parseUint64s(strs, true, false)
	case reflect.Float64:
		vals, _ = key.parseFloat64s(strs, true, false)
		vals, err = key.parseFloat64s(strs, true, false)
	case reflectTime:
		vals, _ = key.parseTimesFormat(time.RFC3339, strs, true, false)
		vals, err = key.parseTimesFormat(time.RFC3339, strs, true, false)
	default:
		return fmt.Errorf("unsupported type '[]%s'", sliceOf)
	}
	if isStrict {
		return err
	}

	slice := reflect.MakeSlice(field.Type(), numVals, numVals)
	for i := 0; i < numVals; i++ {
@@ -136,10 +140,17 @@ func setSliceWithProperType(key *Key, field reflect.Value, delim string, allowSh
	return nil
}

func wrapStrictError(err error, isStrict bool) error {
	if isStrict {
		return err
	}
	return nil
}

// setWithProperType sets proper value to field based on its type,
// but it does not return error for failing parsing,
// because we want to use default value that is already assigned to strcut.
func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim string, allowShadow bool) error {
func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim string, allowShadow, isStrict bool) error {
	switch t.Kind() {
	case reflect.String:
		if len(key.String()) == 0 {
@@ -149,7 +160,7 @@ func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim stri
	case reflect.Bool:
		boolVal, err := key.Bool()
		if err != nil {
			return nil
			return wrapStrictError(err, isStrict)
		}
		field.SetBool(boolVal)
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
@@ -161,8 +172,8 @@ func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim stri
		}

		intVal, err := key.Int64()
		if err != nil || intVal == 0 {
			return nil
		if err != nil {
			return wrapStrictError(err, isStrict)
		}
		field.SetInt(intVal)
	// byte is an alias for uint8, so supporting uint8 breaks support for byte
@@ -176,24 +187,24 @@ func setWithProperType(t reflect.Type, key *Key, field reflect.Value, delim stri

		uintVal, err := key.Uint64()
		if err != nil {
			return nil
			return wrapStrictError(err, isStrict)
		}
		field.SetUint(uintVal)

	case reflect.Float32, reflect.Float64:
		floatVal, err := key.Float64()
		if err != nil {
			return nil
			return wrapStrictError(err, isStrict)
		}
		field.SetFloat(floatVal)
	case reflectTime:
		timeVal, err := key.Time()
		if err != nil {
			return nil
			return wrapStrictError(err, isStrict)
		}
		field.Set(reflect.ValueOf(timeVal))
	case reflect.Slice:
		return setSliceWithProperType(key, field, delim, allowShadow)
		return setSliceWithProperType(key, field, delim, allowShadow, isStrict)
	default:
		return fmt.Errorf("unsupported type '%s'", t)
	}
@@ -212,7 +223,7 @@ func parseTagOptions(tag string) (rawName string, omitEmpty bool, allowShadow bo
	return rawName, omitEmpty, allowShadow
}

func (s *Section) mapTo(val reflect.Value) error {
func (s *Section) mapTo(val reflect.Value, isStrict bool) error {
	if val.Kind() == reflect.Ptr {
		val = val.Elem()
	}
@@ -241,7 +252,7 @@ func (s *Section) mapTo(val reflect.Value) error {

		if isAnonymous || isStruct {
			if sec, err := s.f.GetSection(fieldName); err == nil {
				if err = sec.mapTo(field); err != nil {
				if err = sec.mapTo(field, isStrict); err != nil {
					return fmt.Errorf("error mapping field(%s): %v", fieldName, err)
				}
				continue
@@ -250,7 +261,7 @@ func (s *Section) mapTo(val reflect.Value) error {

		if key, err := s.GetKey(fieldName); err == nil {
			delim := parseDelim(tpField.Tag.Get("delim"))
			if err = setWithProperType(tpField.Type, key, field, delim, allowShadow); err != nil {
			if err = setWithProperType(tpField.Type, key, field, delim, allowShadow, isStrict); err != nil {
				return fmt.Errorf("error mapping field(%s): %v", fieldName, err)
			}
		}
@@ -269,7 +280,22 @@ func (s *Section) MapTo(v interface{}) error {
		return errors.New("cannot map to non-pointer struct")
	}

	return s.mapTo(val)
	return s.mapTo(val, false)
}

// MapTo maps section to given struct in strict mode,
// which returns all possible error including value parsing error.
func (s *Section) StrictMapTo(v interface{}) error {
	typ := reflect.TypeOf(v)
	val := reflect.ValueOf(v)
	if typ.Kind() == reflect.Ptr {
		typ = typ.Elem()
		val = val.Elem()
	} else {
		return errors.New("cannot map to non-pointer struct")
	}

	return s.mapTo(val, true)
}

// MapTo maps file to given struct.
@@ -277,6 +303,12 @@ func (f *File) MapTo(v interface{}) error {
	return f.Section("").MapTo(v)
}

// MapTo maps file to given struct in strict mode,
// which returns all possible error including value parsing error.
func (f *File) StrictMapTo(v interface{}) error {
	return f.Section("").StrictMapTo(v)
}

// MapTo maps data sources to given struct with name mapper.
func MapToWithMapper(v interface{}, mapper NameMapper, source interface{}, others ...interface{}) error {
	cfg, err := Load(source, others...)
@@ -287,11 +319,28 @@ func MapToWithMapper(v interface{}, mapper NameMapper, source interface{}, other
	return cfg.MapTo(v)
}

// StrictMapToWithMapper maps data sources to given struct with name mapper in strict mode,
// which returns all possible error including value parsing error.
func StrictMapToWithMapper(v interface{}, mapper NameMapper, source interface{}, others ...interface{}) error {
	cfg, err := Load(source, others...)
	if err != nil {
		return err
	}
	cfg.NameMapper = mapper
	return cfg.StrictMapTo(v)
}

// MapTo maps data sources to given struct.
func MapTo(v, source interface{}, others ...interface{}) error {
	return MapToWithMapper(v, nil, source, others...)
}

// StrictMapTo maps data sources to given struct in strict mode,
// which returns all possible error including value parsing error.
func StrictMapTo(v, source interface{}, others ...interface{}) error {
	return StrictMapToWithMapper(v, nil, source, others...)
}

// reflectSliceWithProperType does the opposite thing as setSliceWithProperType.
func reflectSliceWithProperType(key *Key, field reflect.Value, delim string) error {
	slice := field.Slice(0, field.Len())
@@ -359,10 +408,11 @@ func isEmptyValue(v reflect.Value) bool {
		return v.Uint() == 0
	case reflect.Float32, reflect.Float64:
		return v.Float() == 0
	case reflectTime:
		return v.Interface().(time.Time).IsZero()
	case reflect.Interface, reflect.Ptr:
		return v.IsNil()
	case reflectTime:
		t, ok := v.Interface().(time.Time)
		return ok && t.IsZero()
	}
	return false
}
15
vendor/src/github.com/go-ini/ini/struct_test.go
vendored
@@ -229,6 +229,21 @@ func Test_Struct(t *testing.T) {
		})
	})

	Convey("Map to struct in strict mode", t, func() {
		cfg, err := Load([]byte(`
name=bruce
age=a30`))
		So(err, ShouldBeNil)

		type Strict struct {
			Name string `ini:"name"`
			Age  int    `ini:"age"`
		}
		s := new(Strict)

		So(cfg.Section("").StrictMapTo(s), ShouldNotBeNil)
	})

	Convey("Reflect from struct", t, func() {
		type Embeded struct {
			Dates []time.Time `delim:"|"`
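For readers skimming the hunks above: the only behavioral difference strict mode introduces is error propagation. A small stand-alone sketch of the contrast, mirroring the `age=a30` data used in the test above:

```go
package main

import (
	"fmt"

	"github.com/go-ini/ini"
)

type config struct {
	Name string `ini:"name"`
	Age  int    `ini:"age"`
}

func main() {
	src := []byte("name=bruce\nage=a30")

	var c config
	// Lenient mapping: "a30" fails to parse as int, the error is swallowed
	// and Age silently keeps its zero value.
	fmt.Println(ini.MapTo(&c, src)) // <nil>

	// Strict mapping: the same parse failure is surfaced as an error.
	fmt.Println(ini.StrictMapTo(&c, src)) // non-nil parse error
}
```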
5
vendor/src/github.com/kurin/blazer/README.md
vendored
@@ -77,10 +77,7 @@ Downloading is as simple as uploading:

```go
func downloadFile(ctx context.Context, bucket *b2.Bucket, downloads int, src, dst string) error {
	r, err := bucket.Object(src).NewReader(ctx)
	if err != nil {
		return err
	}
	r := bucket.Object(src).NewReader(ctx)
	defer r.Close()

	f, err := file.Create(dst)
16
vendor/src/github.com/kurin/blazer/b2/b2.go
vendored
@@ -332,11 +332,9 @@ func (o *Object) Name() string {

// Attrs returns an object's attributes.
func (o *Object) Attrs(ctx context.Context) (*Attrs, error) {
	f, err := o.b.b.downloadFileByName(ctx, o.name, 0, 1)
	if err != nil {
	if err := o.ensure(ctx); err != nil {
		return nil, err
	}
	o.f = o.b.b.file(f.id())
	fi, err := o.f.getFileInfo(ctx)
	if err != nil {
		return nil, err
@@ -585,20 +583,14 @@ func (b *Bucket) Reveal(ctx context.Context, name string) error {
}

func (b *Bucket) getObject(ctx context.Context, name string) (*Object, error) {
	fs, _, err := b.b.listFileNames(ctx, 1, name, "", "")
	fr, err := b.b.downloadFileByName(ctx, name, 0, 1)
	if err != nil {
		return nil, err
	}
	if len(fs) < 1 {
		return nil, b2err{err: fmt.Errorf("%s: not found", name), notFoundErr: true}
	}
	f := fs[0]
	if f.name() != name {
		return nil, b2err{err: fmt.Errorf("%s: not found", name), notFoundErr: true}
	}
	fr.Close()
	return &Object{
		name: name,
		f:    f,
		f:    b.b.file(fr.id(), name),
		b:    b,
	}, nil
}
27
vendor/src/github.com/kurin/blazer/b2/b2_test.go
vendored
@@ -32,10 +32,12 @@ import (

const (
	bucketName    = "b2-tests"
	smallFileName = "TeenyTiny"
	smallFileName = "Teeny Tiny"
	largeFileName = "BigBytes"
)

var gmux = &sync.Mutex{}

type testError struct {
	retry   bool
	backoff time.Duration
@@ -167,6 +169,8 @@ func (t *testBucket) startLargeFile(_ context.Context, name, _ string, _ map[str

func (t *testBucket) listFileNames(ctx context.Context, count int, cont, pfx, del string) ([]b2FileInterface, string, error) {
	var f []string
	gmux.Lock()
	defer gmux.Unlock()
	for name := range t.files {
		f = append(f, name)
	}
@@ -196,6 +200,8 @@ func (t *testBucket) listFileVersions(ctx context.Context, count int, a, b, c, d
}

func (t *testBucket) downloadFileByName(_ context.Context, name string, offset, size int64) (b2FileReaderInterface, error) {
	gmux.Lock()
	defer gmux.Unlock()
	f := t.files[name]
	end := int(offset + size)
	if end >= len(f) {
@@ -215,8 +221,8 @@ func (t *testBucket) hideFile(context.Context, string) (b2FileInterface, error)
func (t *testBucket) getDownloadAuthorization(context.Context, string, time.Duration) (string, error) {
	return "", nil
}
func (t *testBucket) baseURL() string                { return "" }
func (t *testBucket) file(id string) b2FileInterface { return nil }
func (t *testBucket) baseURL() string                      { return "" }
func (t *testBucket) file(id, name string) b2FileInterface { return nil }

type testURL struct {
	files map[string]string
@@ -229,6 +235,8 @@ func (t *testURL) uploadFile(_ context.Context, r io.Reader, _ int, name, _, _ s
	if _, err := io.Copy(buf, r); err != nil {
		return nil, err
	}
	gmux.Lock()
	defer gmux.Unlock()
	t.files[name] = buf.String()
	return &testFile{
		n: name,
@@ -239,7 +247,6 @@ func (t *testURL) uploadFile(_ context.Context, r io.Reader, _ int, name, _, _ s

type testLargeFile struct {
	name  string
	mux   sync.Mutex
	parts map[int][]byte
	files map[string]string
	errs  *errCont
@@ -247,6 +254,8 @@ type testLargeFile struct {

func (t *testLargeFile) finishLargeFile(context.Context) (b2FileInterface, error) {
	var total []byte
	gmux.Lock()
	defer gmux.Unlock()
	for i := 1; i <= len(t.parts); i++ {
		total = append(total, t.parts[i]...)
	}
@@ -259,15 +268,15 @@ func (t *testLargeFile) finishLargeFile(context.Context) (b2FileInterface, error
}

func (t *testLargeFile) getUploadPartURL(context.Context) (b2FileChunkInterface, error) {
	gmux.Lock()
	defer gmux.Unlock()
	return &testFileChunk{
		parts: t.parts,
		mux:   &t.mux,
		errs:  t.errs,
	}, nil
}

type testFileChunk struct {
	mux   *sync.Mutex
	parts map[int][]byte
	errs  *errCont
}
@@ -283,9 +292,9 @@ func (t *testFileChunk) uploadPart(_ context.Context, r io.Reader, _ string, _,
	if err != nil {
		return int(i), err
	}
	t.mux.Lock()
	gmux.Lock()
	defer gmux.Unlock()
	t.parts[index] = buf.Bytes()
	t.mux.Unlock()
	return int(i), nil
}

@@ -315,6 +324,8 @@ func (t *testFile) listParts(context.Context, int, int) ([]b2FilePartInterface,
}

func (t *testFile) deleteFileVersion(context.Context) error {
	gmux.Lock()
	defer gmux.Unlock()
	delete(t.files, t.n)
	return nil
}
@@ -54,7 +54,7 @@ type beBucketInterface interface {
	hideFile(context.Context, string) (beFileInterface, error)
	getDownloadAuthorization(context.Context, string, time.Duration) (string, error)
	baseURL() string
	file(string) beFileInterface
	file(string, string) beFileInterface
}

type beBucket struct {
@@ -407,9 +407,9 @@ func (b *beBucket) baseURL() string {
	return b.b2bucket.baseURL()
}

func (b *beBucket) file(id string) beFileInterface {
func (b *beBucket) file(id, name string) beFileInterface {
	return &beFile{
		b2file: b.b2bucket.file(id),
		b2file: b.b2bucket.file(id, name),
		ri:     b.ri,
	}
}

@@ -51,7 +51,7 @@ type b2BucketInterface interface {
	hideFile(context.Context, string) (b2FileInterface, error)
	getDownloadAuthorization(context.Context, string, time.Duration) (string, error)
	baseURL() string
	file(string) b2FileInterface
	file(string, string) b2FileInterface
}

type b2URLInterface interface {
@@ -315,8 +315,12 @@ func (b *b2Bucket) listFileVersions(ctx context.Context, count int, nextName, ne
func (b *b2Bucket) downloadFileByName(ctx context.Context, name string, offset, size int64) (b2FileReaderInterface, error) {
	fr, err := b.b.DownloadFileByName(ctx, name, offset, size)
	if err != nil {
		if code, _ := base.Code(err); code == http.StatusRequestedRangeNotSatisfiable {
		code, _ := base.Code(err)
		switch code {
		case http.StatusRequestedRangeNotSatisfiable:
			return nil, errNoMoreContent
		case http.StatusNotFound:
			return nil, b2err{err: err, notFoundErr: true}
		}
		return nil, err
	}
@@ -339,7 +343,7 @@ func (b *b2Bucket) baseURL() string {
	return b.b.BaseURL()
}

func (b *b2Bucket) file(id string) b2FileInterface { return &b2File{b.b.File(id)} }
func (b *b2Bucket) file(id, name string) b2FileInterface { return &b2File{b.b.File(id, name)} }

func (b *b2URL) uploadFile(ctx context.Context, r io.Reader, size int, name, contentType, sha1 string, info map[string]string) (b2FileInterface, error) {
	file, err := b.b.UploadFile(ctx, r, size, name, contentType, sha1, info)
@@ -374,6 +378,9 @@ func (b *b2File) status() string {
}

func (b *b2File) getFileInfo(ctx context.Context) (b2FileInfoInterface, error) {
	if b.b.Info != nil {
		return &b2FileInfo{b.b.Info}, nil
	}
	fi, err := b.b.GetFileInfo(ctx)
	if err != nil {
		return nil, err
@@ -22,6 +22,7 @@ import (
	"net/http"
	"os"
	"reflect"
	"sync"
	"testing"
	"time"

@@ -219,6 +220,14 @@ func TestAttrs(t *testing.T) {
			LastModified: time.Unix(1464370149, 142000000),
			Info:         map[string]string{}, // can't be nil
		},
		&Attrs{
			ContentType: "arbitrarystring",
			Info: map[string]string{
				"spaces":  "string with spaces",
				"unicode": "日本語",
				"special": "&/!@_.~",
			},
		},
	}

	table := []struct {
@@ -615,6 +624,105 @@ func TestDuelingBuckets(t *testing.T) {
	}
}

func TestNotExist(t *testing.T) {
	ctx := context.Background()
	ctx, cancel := context.WithTimeout(ctx, 10*time.Minute)
	defer cancel()

	bucket, done := startLiveTest(ctx, t)
	defer done()

	if _, err := bucket.Object("not there").Attrs(ctx); !IsNotExist(err) {
		t.Errorf("IsNotExist() on nonexistent object returned false (%v)", err)
	}
}

func TestWriteEmpty(t *testing.T) {
	ctx := context.Background()
	ctx, cancel := context.WithTimeout(ctx, 10*time.Minute)
	defer cancel()

	bucket, done := startLiveTest(ctx, t)
	defer done()

	_, _, err := writeFile(ctx, bucket, smallFileName, 0, 1e8)
	if err != nil {
		t.Fatal(err)
	}
}

type rtCounter struct {
	rt    http.RoundTripper
	trips int
	sync.Mutex
}

func (rt *rtCounter) RoundTrip(r *http.Request) (*http.Response, error) {
	rt.Lock()
	defer rt.Unlock()
	rt.trips++
	return rt.rt.RoundTrip(r)
}

func TestAttrsNoRoundtrip(t *testing.T) {
	rt := &rtCounter{rt: transport}
	transport = rt
	defer func() {
		transport = rt.rt
	}()

	ctx := context.Background()
	ctx, cancel := context.WithTimeout(ctx, 10*time.Minute)
	defer cancel()

	bucket, done := startLiveTest(ctx, t)
	defer done()

	_, _, err := writeFile(ctx, bucket, smallFileName, 1e6+42, 1e8)
	if err != nil {
		t.Fatal(err)
	}

	objs, _, err := bucket.ListObjects(ctx, 1, nil)
	if err != nil {
		t.Fatal(err)
	}
	if len(objs) != 1 {
		t.Fatal("unexpected objects: got %d, want 1", len(objs))
	}

	trips := rt.trips
	attrs, err := objs[0].Attrs(ctx)
	if err != nil {
		t.Fatal(err)
	}
	if attrs.Name != smallFileName {
		t.Errorf("got the wrong object: got %q, want %q", attrs.Name, smallFileName)
	}

	if trips != rt.trips {
		t.Errorf("Attrs() should not have caused any net traffic, but it did: old %d, new %d", trips, rt.trips)
	}
}

func TestDeleteWithoutName(t *testing.T) {
	ctx := context.Background()
	ctx, cancel := context.WithTimeout(ctx, 10*time.Minute)
	defer cancel()

	bucket, done := startLiveTest(ctx, t)
	defer done()

	_, _, err := writeFile(ctx, bucket, smallFileName, 1e6+42, 1e8)
	if err != nil {
		t.Fatal(err)
	}

	if err := bucket.Object(smallFileName).Delete(ctx); err != nil {
		t.Fatal(err)
	}
}

type object struct {
	o   *Object
	err error
@@ -655,6 +763,8 @@ func listObjects(ctx context.Context, f func(context.Context, int, *Cursor) ([]*
	return ch
}

var transport = http.DefaultTransport

func startLiveTest(ctx context.Context, t *testing.T) (*Bucket, func()) {
	id := os.Getenv(apiID)
	key := os.Getenv(apiKey)
@@ -662,7 +772,7 @@ func startLiveTest(ctx context.Context, t *testing.T) (*Bucket, func()) {
		t.Skipf("B2_ACCOUNT_ID or B2_SECRET_KEY unset; skipping integration tests")
		return nil, nil
	}
	client, err := NewClient(ctx, id, key, FailSomeUploads(), ExpireSomeAuthTokens())
	client, err := NewClient(ctx, id, key, FailSomeUploads(), ExpireSomeAuthTokens(), Transport(transport))
	if err != nil {
		t.Fatal(err)
		return nil, nil
34
vendor/src/github.com/kurin/blazer/b2/writer.go
vendored
@@ -64,16 +64,17 @@ type Writer struct {
	contentType string
	info        map[string]string

	csize  int
	ctx    context.Context
	cancel context.CancelFunc
	ready  chan chunk
	wg     sync.WaitGroup
	start  sync.Once
	once   sync.Once
	done   sync.Once
	file   beLargeFileInterface
	seen   map[int]string
	csize       int
	ctx         context.Context
	cancel      context.CancelFunc
	ready       chan chunk
	wg          sync.WaitGroup
	start       sync.Once
	once        sync.Once
	done        sync.Once
	file        beLargeFileInterface
	seen        map[int]string
	everStarted bool

	o    *Object
	name string
@@ -202,6 +203,7 @@ func (w *Writer) thread() {
// Write satisfies the io.Writer interface.
func (w *Writer) Write(p []byte) (int, error) {
	w.start.Do(func() {
		w.everStarted = true
		w.smux.Lock()
		w.smap = make(map[int]*meteredReader)
		w.smux.Unlock()
@@ -362,6 +364,9 @@ func (w *Writer) sendChunk() error {
// value of Close on all writers.
func (w *Writer) Close() error {
	w.done.Do(func() {
		if !w.everStarted {
			return
		}
		defer w.o.b.c.removeWriter(w)
		defer w.w.Close() // TODO: log error
		if w.cidx == 0 {
@@ -419,16 +424,21 @@ type meteredReader struct {
	read int64
	size int
	r    io.ReadSeeker
	mux  sync.Mutex
}

func (mr *meteredReader) Read(p []byte) (int, error) {
	mr.mux.Lock()
	defer mr.mux.Unlock()
	n, err := mr.r.Read(p)
	atomic.AddInt64(&mr.read, int64(n))
	mr.read += int64(n)
	return n, err
}

func (mr *meteredReader) Seek(offset int64, whence int) (int64, error) {
	atomic.StoreInt64(&mr.read, offset)
	mr.mux.Lock()
	defer mr.mux.Unlock()
	mr.read = offset
	return mr.r.Seek(offset, whence)
}
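The meteredReader hunk above swaps mixed atomic operations for a single mutex: once Seek must reset the same counter that Read increments, one lock serializing both keeps the counter and the underlying reader position consistent. The pattern in isolation, as an illustrative sketch (not upstream code):

```go
package sketch

import (
	"io"
	"sync"
)

// countingReader mirrors the pattern meteredReader adopts above: one mutex
// guards both the byte counter and the wrapped reader, so Read and Seek can
// never interleave and leave the two out of sync.
type countingReader struct {
	mu   sync.Mutex
	read int64
	r    io.ReadSeeker
}

func (c *countingReader) Read(p []byte) (int, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	n, err := c.r.Read(p)
	c.read += int64(n)
	return n, err
}

func (c *countingReader) Seek(offset int64, whence int) (int64, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.read = offset // as in the diff: the counter tracks the absolute offset
	return c.r.Seek(offset, whence)
}
```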
70
vendor/src/github.com/kurin/blazer/base/base.go
vendored
@@ -277,7 +277,7 @@ func (rb *requestBody) getBody() io.Reader {

var reqID int64

func (o *b2Options) makeRequest(ctx context.Context, method, verb, url string, b2req, b2resp interface{}, headers map[string]string, body *requestBody) error {
func (o *b2Options) makeRequest(ctx context.Context, method, verb, uri string, b2req, b2resp interface{}, headers map[string]string, body *requestBody) error {
	var args []byte
	if b2req != nil {
		enc, err := json.Marshal(b2req)
@@ -290,12 +290,15 @@ func (o *b2Options) makeRequest(ctx context.Context, method, verb, url string, b
			size: int64(len(enc)),
		}
	}
	req, err := http.NewRequest(verb, url, body.getBody())
	req, err := http.NewRequest(verb, uri, body.getBody())
	if err != nil {
		return err
	}
	req.ContentLength = body.getSize()
	for k, v := range headers {
		if strings.HasPrefix(k, "X-Bz-Info") || strings.HasPrefix(k, "X-Bz-File-Name") {
			v = escape(v)
		}
		req.Header.Set(k, v)
	}
	req.Header.Set("X-Blazer-Request-ID", fmt.Sprintf("%d", atomic.AddInt64(&reqID, 1)))
@@ -322,6 +325,7 @@ func (o *b2Options) makeRequest(ctx context.Context, method, verb, url string, b
	}
	if reply.err != nil {
		// Connection errors are retryable.
		blog.V(2).Infof(">> %s uri: %v err: %v", method, req.URL, reply.err)
		return b2err{
			msg:   reply.err.Error(),
			retry: 1,
@@ -613,20 +617,21 @@ type File struct {
	Size      int64
	Status    string
	Timestamp time.Time
	Info      *FileInfo
	id        string
	b2        *B2
}

// File returns a bare File struct, but with the appropriate id and b2
// interfaces.
func (b *Bucket) File(id string) *File {
	return &File{id: id, b2: b.b2}
func (b *Bucket) File(id, name string) *File {
	return &File{id: id, b2: b.b2, Name: name}
}

// UploadFile wraps b2_upload_file.
func (url *URL) UploadFile(ctx context.Context, r io.Reader, size int, name, contentType, sha1 string, info map[string]string) (*File, error) {
func (u *URL) UploadFile(ctx context.Context, r io.Reader, size int, name, contentType, sha1 string, info map[string]string) (*File, error) {
	headers := map[string]string{
		"Authorization":  url.token,
		"Authorization":  u.token,
		"X-Bz-File-Name": name,
		"Content-Type":   contentType,
		"Content-Length": fmt.Sprintf("%d", size),
@@ -636,7 +641,7 @@ func (url *URL) UploadFile(ctx context.Context, r io.Reader, size int, name, con
		headers[fmt.Sprintf("X-Bz-Info-%s", k)] = v
	}
	b2resp := &b2types.UploadFileResponse{}
	if err := url.b2.opts.makeRequest(ctx, "b2_upload_file", "POST", url.uri, nil, b2resp, headers, &requestBody{body: r, size: int64(size)}); err != nil {
	if err := u.b2.opts.makeRequest(ctx, "b2_upload_file", "POST", u.uri, nil, b2resp, headers, &requestBody{body: r, size: int64(size)}); err != nil {
		return nil, err
	}
	return &File{
@@ -645,7 +650,7 @@ func (url *URL) UploadFile(ctx context.Context, r io.Reader, size int, name, con
		Timestamp: millitime(b2resp.Timestamp),
		Status:    b2resp.Action,
		id:        b2resp.FileID,
		b2:        url.b2,
		b2:        u.b2,
	}, nil
}

@@ -868,8 +873,17 @@ func (b *Bucket) ListFileNames(ctx context.Context, count int, continuation, pre
			Size:      f.Size,
			Status:    f.Action,
			Timestamp: millitime(f.Timestamp),
			id:        f.FileID,
			b2:        b.b2,
			Info: &FileInfo{
				Name:        f.Name,
				SHA1:        f.SHA1,
				Size:        f.Size,
				ContentType: f.ContentType,
				Info:        f.Info,
				Status:      f.Action,
				Timestamp:   millitime(f.Timestamp),
			},
			id: f.FileID,
			b2: b.b2,
		})
	}
	return files, cont, nil
@@ -899,8 +913,17 @@ func (b *Bucket) ListFileVersions(ctx context.Context, count int, startName, sta
			Size:      f.Size,
			Status:    f.Action,
			Timestamp: millitime(f.Timestamp),
			id:        f.FileID,
			b2:        b.b2,
			Info: &FileInfo{
				Name:        f.Name,
				SHA1:        f.SHA1,
				Size:        f.Size,
				ContentType: f.ContentType,
				Info:        f.Info,
				Status:      f.Action,
				Timestamp:   millitime(f.Timestamp),
			},
			id: f.FileID,
			b2: b.b2,
		})
	}
	return files, b2resp.NextName, b2resp.NextID, nil
@@ -945,8 +968,8 @@ func mkRange(offset, size int64) string {

// DownloadFileByName wraps b2_download_file_by_name.
func (b *Bucket) DownloadFileByName(ctx context.Context, name string, offset, size int64) (*FileReader, error) {
	url := fmt.Sprintf("%s/file/%s/%s", b.b2.downloadURI, b.Name, name)
	req, err := http.NewRequest("GET", url, nil)
	uri := fmt.Sprintf("%s/file/%s/%s", b.b2.downloadURI, b.Name, name)
	req, err := http.NewRequest("GET", uri, nil)
	if err != nil {
		return nil, err
	}
@@ -978,6 +1001,7 @@ func (b *Bucket) DownloadFileByName(ctx context.Context, name string, offset, si
	}
	clen, err := strconv.ParseInt(reply.resp.Header.Get("Content-Length"), 10, 64)
	if err != nil {
		reply.resp.Body.Close()
		return nil, err
	}
	info := make(map[string]string)
@@ -985,8 +1009,17 @@ func (b *Bucket) DownloadFileByName(ctx context.Context, name string, offset, si
		if !strings.HasPrefix(key, "X-Bz-Info-") {
			continue
		}
		name := strings.TrimPrefix(key, "X-Bz-Info-")
		info[name] = reply.resp.Header.Get(key)
		name, err := unescape(strings.TrimPrefix(key, "X-Bz-Info-"))
		if err != nil {
			reply.resp.Body.Close()
			return nil, err
		}
		val, err := unescape(reply.resp.Header.Get(key))
		if err != nil {
			reply.resp.Body.Close()
			return nil, err
		}
		info[name] = val
	}
	return &FileReader{
		ReadCloser: reply.resp.Body,
@@ -1046,7 +1079,7 @@ func (f *File) GetFileInfo(ctx context.Context) (*FileInfo, error) {
	f.Status = b2resp.Action
	f.Name = b2resp.Name
	f.Timestamp = millitime(b2resp.Timestamp)
	return &FileInfo{
	f.Info = &FileInfo{
		Name: b2resp.Name,
		SHA1: b2resp.SHA1,
		Size: b2resp.Size,
@@ -1054,5 +1087,6 @@ func (f *File) GetFileInfo(ctx context.Context) (*FileInfo, error) {
		Info:      b2resp.Info,
		Status:    b2resp.Action,
		Timestamp: millitime(b2resp.Timestamp),
	}, nil
	}
	return f.Info, nil
}
@@ -17,10 +17,12 @@ package base
import (
	"bytes"
	"crypto/sha1"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"reflect"
	"strings"
	"testing"
	"time"

@@ -277,3 +279,140 @@ func compareFileAndInfo(t *testing.T, info *FileInfo, name, sha1 string, imap ma
		t.Errorf("got %v, want %v", info.Info, imap)
	}
}

// from https://www.backblaze.com/b2/docs/string_encoding.html
var testCases = `[
	{"fullyEncoded": "%20", "minimallyEncoded": "+", "string": " "},
	{"fullyEncoded": "%21", "minimallyEncoded": "!", "string": "!"},
	{"fullyEncoded": "%22", "minimallyEncoded": "%22", "string": "\""},
	{"fullyEncoded": "%23", "minimallyEncoded": "%23", "string": "#"},
	{"fullyEncoded": "%24", "minimallyEncoded": "$", "string": "$"},
	{"fullyEncoded": "%25", "minimallyEncoded": "%25", "string": "%"},
	{"fullyEncoded": "%26", "minimallyEncoded": "%26", "string": "&"},
	{"fullyEncoded": "%27", "minimallyEncoded": "'", "string": "'"},
	{"fullyEncoded": "%28", "minimallyEncoded": "(", "string": "("},
	{"fullyEncoded": "%29", "minimallyEncoded": ")", "string": ")"},
	{"fullyEncoded": "%2A", "minimallyEncoded": "*", "string": "*"},
	{"fullyEncoded": "%2B", "minimallyEncoded": "%2B", "string": "+"},
	{"fullyEncoded": "%2C", "minimallyEncoded": "%2C", "string": ","},
	{"fullyEncoded": "%2D", "minimallyEncoded": "-", "string": "-"},
	{"fullyEncoded": "%2E", "minimallyEncoded": ".", "string": "."},
	{"fullyEncoded": "/", "minimallyEncoded": "/", "string": "/"},
	{"fullyEncoded": "%30", "minimallyEncoded": "0", "string": "0"},
	{"fullyEncoded": "%31", "minimallyEncoded": "1", "string": "1"},
	{"fullyEncoded": "%32", "minimallyEncoded": "2", "string": "2"},
	{"fullyEncoded": "%33", "minimallyEncoded": "3", "string": "3"},
	{"fullyEncoded": "%34", "minimallyEncoded": "4", "string": "4"},
	{"fullyEncoded": "%35", "minimallyEncoded": "5", "string": "5"},
	{"fullyEncoded": "%36", "minimallyEncoded": "6", "string": "6"},
	{"fullyEncoded": "%37", "minimallyEncoded": "7", "string": "7"},
	{"fullyEncoded": "%38", "minimallyEncoded": "8", "string": "8"},
	{"fullyEncoded": "%39", "minimallyEncoded": "9", "string": "9"},
	{"fullyEncoded": "%3A", "minimallyEncoded": ":", "string": ":"},
	{"fullyEncoded": "%3B", "minimallyEncoded": ";", "string": ";"},
	{"fullyEncoded": "%3C", "minimallyEncoded": "%3C", "string": "<"},
	{"fullyEncoded": "%3D", "minimallyEncoded": "=", "string": "="},
	{"fullyEncoded": "%3E", "minimallyEncoded": "%3E", "string": ">"},
	{"fullyEncoded": "%3F", "minimallyEncoded": "%3F", "string": "?"},
	{"fullyEncoded": "%40", "minimallyEncoded": "@", "string": "@"},
	{"fullyEncoded": "%41", "minimallyEncoded": "A", "string": "A"},
	{"fullyEncoded": "%42", "minimallyEncoded": "B", "string": "B"},
	{"fullyEncoded": "%43", "minimallyEncoded": "C", "string": "C"},
	{"fullyEncoded": "%44", "minimallyEncoded": "D", "string": "D"},
	{"fullyEncoded": "%45", "minimallyEncoded": "E", "string": "E"},
	{"fullyEncoded": "%46", "minimallyEncoded": "F", "string": "F"},
	{"fullyEncoded": "%47", "minimallyEncoded": "G", "string": "G"},
	{"fullyEncoded": "%48", "minimallyEncoded": "H", "string": "H"},
	{"fullyEncoded": "%49", "minimallyEncoded": "I", "string": "I"},
	{"fullyEncoded": "%4A", "minimallyEncoded": "J", "string": "J"},
	{"fullyEncoded": "%4B", "minimallyEncoded": "K", "string": "K"},
	{"fullyEncoded": "%4C", "minimallyEncoded": "L", "string": "L"},
	{"fullyEncoded": "%4D", "minimallyEncoded": "M", "string": "M"},
	{"fullyEncoded": "%4E", "minimallyEncoded": "N", "string": "N"},
	{"fullyEncoded": "%4F", "minimallyEncoded": "O", "string": "O"},
	{"fullyEncoded": "%50", "minimallyEncoded": "P", "string": "P"},
	{"fullyEncoded": "%51", "minimallyEncoded": "Q", "string": "Q"},
	{"fullyEncoded": "%52", "minimallyEncoded": "R", "string": "R"},
	{"fullyEncoded": "%53", "minimallyEncoded": "S", "string": "S"},
	{"fullyEncoded": "%54", "minimallyEncoded": "T", "string": "T"},
	{"fullyEncoded": "%55", "minimallyEncoded": "U", "string": "U"},
	{"fullyEncoded": "%56", "minimallyEncoded": "V", "string": "V"},
	{"fullyEncoded": "%57", "minimallyEncoded": "W", "string": "W"},
	{"fullyEncoded": "%58", "minimallyEncoded": "X", "string": "X"},
	{"fullyEncoded": "%59", "minimallyEncoded": "Y", "string": "Y"},
	{"fullyEncoded": "%5A", "minimallyEncoded": "Z", "string": "Z"},
	{"fullyEncoded": "%5B", "minimallyEncoded": "%5B", "string": "["},
	{"fullyEncoded": "%5C", "minimallyEncoded": "%5C", "string": "\\"},
	{"fullyEncoded": "%5D", "minimallyEncoded": "%5D", "string": "]"},
	{"fullyEncoded": "%5E", "minimallyEncoded": "%5E", "string": "^"},
	{"fullyEncoded": "%5F", "minimallyEncoded": "_", "string": "_"},
	{"fullyEncoded": "%60", "minimallyEncoded": "%60", "string": "` + "`" + `"},
	{"fullyEncoded": "%61", "minimallyEncoded": "a", "string": "a"},
	{"fullyEncoded": "%62", "minimallyEncoded": "b", "string": "b"},
	{"fullyEncoded": "%63", "minimallyEncoded": "c", "string": "c"},
	{"fullyEncoded": "%64", "minimallyEncoded": "d", "string": "d"},
	{"fullyEncoded": "%65", "minimallyEncoded": "e", "string": "e"},
	{"fullyEncoded": "%66", "minimallyEncoded": "f", "string": "f"},
	{"fullyEncoded": "%67", "minimallyEncoded": "g", "string": "g"},
	{"fullyEncoded": "%68", "minimallyEncoded": "h", "string": "h"},
	{"fullyEncoded": "%69", "minimallyEncoded": "i", "string": "i"},
	{"fullyEncoded": "%6A", "minimallyEncoded": "j", "string": "j"},
	{"fullyEncoded": "%6B", "minimallyEncoded": "k", "string": "k"},
	{"fullyEncoded": "%6C", "minimallyEncoded": "l", "string": "l"},
	{"fullyEncoded": "%6D", "minimallyEncoded": "m", "string": "m"},
	{"fullyEncoded": "%6E", "minimallyEncoded": "n", "string": "n"},
	{"fullyEncoded": "%6F", "minimallyEncoded": "o", "string": "o"},
	{"fullyEncoded": "%70", "minimallyEncoded": "p", "string": "p"},
	{"fullyEncoded": "%71", "minimallyEncoded": "q", "string": "q"},
	{"fullyEncoded": "%72", "minimallyEncoded": "r", "string": "r"},
	{"fullyEncoded": "%73", "minimallyEncoded": "s", "string": "s"},
	{"fullyEncoded": "%74", "minimallyEncoded": "t", "string": "t"},
	{"fullyEncoded": "%75", "minimallyEncoded": "u", "string": "u"},
	{"fullyEncoded": "%76", "minimallyEncoded": "v", "string": "v"},
	{"fullyEncoded": "%77", "minimallyEncoded": "w", "string": "w"},
	{"fullyEncoded": "%78", "minimallyEncoded": "x", "string": "x"},
	{"fullyEncoded": "%79", "minimallyEncoded": "y", "string": "y"},
	{"fullyEncoded": "%7A", "minimallyEncoded": "z", "string": "z"},
	{"fullyEncoded": "%7B", "minimallyEncoded": "%7B", "string": "{"},
	{"fullyEncoded": "%7C", "minimallyEncoded": "%7C", "string": "|"},
	{"fullyEncoded": "%7D", "minimallyEncoded": "%7D", "string": "}"},
	{"fullyEncoded": "%7E", "minimallyEncoded": "~", "string": "~"},
	{"fullyEncoded": "%7F", "minimallyEncoded": "%7F", "string": "\u007f"},
	{"fullyEncoded": "%E8%87%AA%E7%94%B1", "minimallyEncoded": "%E8%87%AA%E7%94%B1", "string": "\u81ea\u7531"},
	{"fullyEncoded": "%F0%90%90%80", "minimallyEncoded": "%F0%90%90%80", "string": "\ud801\udc00"}
]`

type testCase struct {
	Full string `json:"fullyEncoded"`
	Min  string `json:"minimallyEncoded"`
	Raw  string `json:"string"`
}

func TestEscapes(t *testing.T) {
	dec := json.NewDecoder(strings.NewReader(testCases))
	var tcs []testCase
	if err := dec.Decode(&tcs); err != nil {
		t.Fatal(err)
	}
	for _, tc := range tcs {
		en := escape(tc.Raw)
		if !(en == tc.Full || en == tc.Min) {
			t.Errorf("encode %q: got %q, want %q or %q", tc.Raw, en, tc.Min, tc.Full)
		}

		m, err := unescape(tc.Min)
		if err != nil {
			t.Errorf("decode %q: %v", tc.Min, err)
		}
		if m != tc.Raw {
			t.Errorf("decode %q: got %q, want %q", tc.Min, m, tc.Raw)
		}
		f, err := unescape(tc.Full)
		if err != nil {
			t.Errorf("decode %q: %v", tc.Full, err)
		}
		if f != tc.Raw {
			t.Errorf("decode %q: got %q, want %q", tc.Full, f, tc.Raw)
		}
	}
}
81
vendor/src/github.com/kurin/blazer/base/strings.go
vendored
Normal file
@@ -0,0 +1,81 @@
// Copyright 2017, Google
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package base

import (
	"bytes"
	"errors"
	"fmt"
)

func noEscape(c byte) bool {
	switch c {
	case '.', '_', '-', '/', '~', '!', '$', '\'', '(', ')', '*', ';', '=', ':', '@':
		return true
	}
	return false
}

func escape(s string) string {
	// cribbed from url.go, kinda
	b := &bytes.Buffer{}
	for i := 0; i < len(s); i++ {
		switch c := s[i]; {
		case c == '/':
			b.WriteByte(c)
		case 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || '0' <= c && c <= '9':
			b.WriteByte(c)
		case noEscape(c):
			b.WriteByte(c)
		default:
			fmt.Fprintf(b, "%%%X", c)
		}
	}
	return b.String()
}

func unescape(s string) (string, error) {
	b := &bytes.Buffer{}
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch c {
		case '/':
			b.WriteString("/")
		case '+':
			b.WriteString(" ")
		case '%':
			if len(s)-i < 3 {
				return "", errors.New("unescape: bad encoding")
			}
			b.WriteByte(unhex(s[i+1])<<4 | unhex(s[i+2]))
			i += 2
		default:
			b.WriteByte(c)
		}
	}
	return b.String(), nil
}

func unhex(c byte) byte {
	switch {
	case '0' <= c && c <= '9':
		return c - '0'
	case 'a' <= c && c <= 'f':
		return c - 'a' + 10
	case 'A' <= c && c <= 'F':
		return c - 'A' + 10
	}
	return 0
}
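These helpers back the header handling added to base.go above: B2 info headers are escaped on upload and unescaped on download, so typical header values survive the round trip unchanged. A quick illustrative check (a sketch, not part of the upstream file):

```go
package base

// roundTripOK demonstrates the intended invariant of escape/unescape for
// ordinary header values: unescape(escape(s)) returns s.
func roundTripOK(s string) bool {
	out, err := unescape(escape(s))
	return err == nil && out == s
}

// roundTripOK("string with spaces & 日本語") == true
```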
134
vendor/src/github.com/kurin/blazer/examples/simple/simple.go
vendored
Normal file
@@ -0,0 +1,134 @@
// Copyright 2017, Google
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// This is a simple program that will copy named files into or out of B2.
//
// To copy a file into B2:
//
//	B2_ACCOUNT_ID=foo B2_ACCOUNT_KEY=bar simple /path/to/file b2://bucket/path/to/dst
//
// To copy a file out:
//
//	B2_ACCOUNT_ID=foo B2_ACCOUNT_KEY=bar simple b2://bucket/path/to/file /path/to/dst
package main

import (
	"context"
	"flag"
	"fmt"
	"io"
	"net/url"
	"os"
	"strings"

	"github.com/kurin/blazer/b2"
)

func main() {
	flag.Parse()
	b2id := os.Getenv("B2_ACCOUNT_ID")
	b2key := os.Getenv("B2_ACCOUNT_KEY")

	args := flag.Args()
	if len(args) != 2 {
		fmt.Printf("Usage:\n\nsimple [src] [dst]\n")
		return
	}
	src, dst := args[0], args[1]

	ctx := context.Background()
	c, err := b2.NewClient(ctx, b2id, b2key)
	if err != nil {
		fmt.Println(err)
		return
	}

	var r io.ReadCloser
	var w io.WriteCloser

	if strings.HasPrefix(src, "b2://") {
		reader, err := b2Reader(ctx, c, src)
		if err != nil {
			fmt.Println(err)
			return
		}
		r = reader
	} else {
		f, err := os.Open(src)
		if err != nil {
			fmt.Println(err)
			return
		}
		r = f
	}
	// Readers do not need their errors checked on close. (Also it's a little
	// silly to defer this in main(), but.)
	defer r.Close()

	if strings.HasPrefix(dst, "b2://") {
		writer, err := b2Writer(ctx, c, dst)
		if err != nil {
			fmt.Println(err)
			return
		}
		w = writer
	} else {
		f, err := os.Create(dst)
		if err != nil {
			fmt.Println(err)
			return
		}
		w = f
	}

	// Copy and check error.
	if _, err := io.Copy(w, r); err != nil {
		fmt.Println(err)
		return
	}

	// It is very important to check the error of the writer.
	if err := w.Close(); err != nil {
		fmt.Println(err)
	}
}

func b2Reader(ctx context.Context, c *b2.Client, path string) (io.ReadCloser, error) {
	o, err := b2Obj(ctx, c, path)
	if err != nil {
		return nil, err
	}
	return o.NewReader(ctx), nil
}

func b2Writer(ctx context.Context, c *b2.Client, path string) (io.WriteCloser, error) {
	o, err := b2Obj(ctx, c, path)
	if err != nil {
		return nil, err
	}
	return o.NewWriter(ctx), nil
}

func b2Obj(ctx context.Context, c *b2.Client, path string) (*b2.Object, error) {
	uri, err := url.Parse(path)
	if err != nil {
		return nil, err
	}
	bucket, err := c.Bucket(ctx, uri.Host)
	if err != nil {
		return nil, err
	}
	// B2 paths must not begin with /, so trim it here.
	return bucket.Object(strings.TrimPrefix(uri.Path, "/")), nil
}
@@ -171,14 +171,8 @@ type ListFileNamesRequest struct {
 }

 type ListFileNamesResponse struct {
-	Continuation string `json:"nextFileName"`
-	Files        []struct {
-		FileID    string `json:"fileId"`
-		Name      string `json:"fileName"`
-		Size      int64  `json:"size"`
-		Action    string `json:"action"`
-		Timestamp int64  `json:"uploadTimestamp"`
-	} `json:"files"`
+	Continuation string                `json:"nextFileName"`
+	Files        []GetFileInfoResponse `json:"files"`
 }

 type ListFileVersionsRequest struct {
@@ -191,15 +185,9 @@ type ListFileVersionsRequest struct {
 }

 type ListFileVersionsResponse struct {
-	NextName string `json:"nextFileName"`
-	NextID   string `json:"nextFileId"`
-	Files    []struct {
-		FileID    string `json:"fileId"`
-		Name      string `json:"fileName"`
-		Size      int64  `json:"size"`
-		Action    string `json:"action"`
-		Timestamp int64  `json:"uploadTimestamp"`
-	} `json:"files"`
+	NextName string                `json:"nextFileName"`
+	NextID   string                `json:"nextFileId"`
+	Files    []GetFileInfoResponse `json:"files"`
 }

 type HideFileRequest struct {
@@ -218,6 +206,7 @@ type GetFileInfoRequest struct {
 }

 type GetFileInfoResponse struct {
 	FileID string `json:"fileId"`
 	Name   string `json:"fileName"`
+	SHA1   string `json:"contentSha1"`
 	Size   int64  `json:"contentLength"`
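Both list responses now reuse GetFileInfoResponse for their files arrays instead of duplicating an anonymous struct. A minimal decoding sketch under that shape (local stand-in types and truncated sample values, not blazer's API):

package main

import (
	"encoding/json"
	"fmt"
)

// fileInfo is a local stand-in for blazer's GetFileInfoResponse.
type fileInfo struct {
	FileID string `json:"fileId"`
	Name   string `json:"fileName"`
	SHA1   string `json:"contentSha1"`
	Size   int64  `json:"contentLength"`
}

// listResponse mirrors the consolidated ListFileNamesResponse shape.
type listResponse struct {
	Continuation string     `json:"nextFileName"`
	Files        []fileInfo `json:"files"`
}

func main() {
	raw := `{"nextFileName":"b.txt","files":[{"fileId":"4_z1","fileName":"a.txt","contentSha1":"da39...","contentLength":0}]}`
	var lr listResponse
	if err := json.Unmarshal([]byte(raw), &lr); err != nil {
		panic(err)
	}
	fmt.Println(lr.Continuation, lr.Files[0].Name)
}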
@@ -34,7 +34,7 @@ func dir() (string, error) {
 		cmd.Stdout = &stdout
 		if err := cmd.Run(); err != nil {
 			// If "getent" is missing, ignore it
-			if err == exec.ErrNotFound {
+			if err != exec.ErrNotFound {
 				return "", err
 			}
 		} else {
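This hunk fixes an inverted condition: the old code returned the error exactly when getent was missing, which is the one case the comment says should be ignored. A minimal sketch of the corrected pattern (assuming a Unix-like system; modern Go compares with errors.Is rather than ==):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

// passwdEntry runs "getent passwd <name>"; a missing getent binary is
// treated as non-fatal, every other failure is reported.
func passwdEntry(name string) (string, error) {
	var stdout bytes.Buffer
	cmd := exec.Command("getent", "passwd", name)
	cmd.Stdout = &stdout
	if err := cmd.Run(); err != nil {
		if !errors.Is(err, exec.ErrNotFound) {
			return "", err
		}
		return "", nil // caller falls back to another strategy
	}
	return stdout.String(), nil
}

func main() {
	entry, err := passwdEntry("root")
	fmt.Printf("%q %v\n", entry, err)
}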
@@ -10,6 +10,10 @@ import (
 // dir returns the homedir of current user for MS Windows OS.
 func dir() (string, error) {
+	// First prefer the HOME environmental variable
+	if home := os.Getenv("HOME"); home != "" {
+		return home, nil
+	}
 	drive := os.Getenv("HOMEDRIVE")
 	path := os.Getenv("HOMEPATH")
 	home := drive + path
@@ -1,9 +1,9 @@
 package homedir

 import (
-	"fmt"
 	"os"
 	"os/user"
+	"path/filepath"
 	"testing"
 )
@@ -66,7 +66,7 @@ func TestExpand(t *testing.T) {
 		{
 			"~/foo",
-			fmt.Sprintf("%s/foo", u.HomeDir),
+			filepath.Join(u.HomeDir, "foo"),
 			false,
 		},
@@ -103,12 +103,12 @@ func TestExpand(t *testing.T) {
 	DisableCache = true
 	defer func() { DisableCache = false }()
 	defer patchEnv("HOME", "/custom/path/")()
-	expected := "/custom/path/foo/bar"
+	expected := filepath.Join("/", "custom", "path", "foo/bar")
 	actual, err := Expand("~/foo/bar")

 	if err != nil {
 		t.Errorf("No error is expected, got: %v", err)
-	} else if actual != "/custom/path/foo/bar" {
+	} else if actual != expected {
 		t.Errorf("Expected: %v; actual: %v", expected, actual)
 	}
 }
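Switching the expectation to filepath.Join keeps the test portable across path separators; a one-line illustration:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Prints /custom/path/foo/bar on Unix; on Windows the separators
	// become backslashes, matching what Expand returns there.
	fmt.Println(filepath.Join("/", "custom", "path", "foo/bar"))
}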
vendor/src/github.com/minio/minio-go/api-compose-object.go (vendored, new file, 532 lines)
@@ -0,0 +1,532 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2017 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package minio

import (
	"encoding/base64"
	"fmt"
	"net/http"
	"net/url"
	"strconv"
	"strings"
	"time"

	"github.com/minio/minio-go/pkg/s3utils"
)

// SSEInfo - represents Server-Side-Encryption parameters specified by
// a user.
type SSEInfo struct {
	key  []byte
	algo string
}

// NewSSEInfo - specifies (binary or un-encoded) encryption key and
// algorithm name. If algo is empty, it defaults to "AES256". Ref:
// https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
func NewSSEInfo(key []byte, algo string) SSEInfo {
	if algo == "" {
		algo = "AES256"
	}
	return SSEInfo{key, algo}
}

// internal method that computes SSE-C headers
func (s *SSEInfo) getSSEHeaders(isCopySource bool) map[string]string {
	if s == nil {
		return nil
	}

	cs := ""
	if isCopySource {
		cs = "copy-source-"
	}
	return map[string]string{
		"x-amz-" + cs + "server-side-encryption-customer-algorithm": s.algo,
		"x-amz-" + cs + "server-side-encryption-customer-key":       base64.StdEncoding.EncodeToString(s.key),
		"x-amz-" + cs + "server-side-encryption-customer-key-MD5":   base64.StdEncoding.EncodeToString(sumMD5(s.key)),
	}
}

// GetSSEHeaders - computes and returns headers for SSE-C as key-value
// pairs. They can be set as metadata in PutObject* requests (for
// encryption) or be set as request headers in `Core.GetObject` (for
// decryption).
func (s *SSEInfo) GetSSEHeaders() map[string]string {
	return s.getSSEHeaders(false)
}

// DestinationInfo - type with information about the object to be
// created via server-side copy requests, using the Compose API.
type DestinationInfo struct {
	bucket, object string

	// key for encrypting destination
	encryption *SSEInfo

	// if no user-metadata is provided, it is copied from source
	// (when there is only one source object in the compose
	// request)
	userMetadata map[string]string
}

// NewDestinationInfo - creates a compose-object/copy-source
// destination info object.
//
// `encSSEC` is the key info for server-side-encryption with customer
// provided key. If it is nil, no encryption is performed.
//
// `userMeta` is the user-metadata key-value pairs to be set on the
// destination. The keys are automatically prefixed with `x-amz-meta-`
// if needed. If nil is passed, and if only a single source (of any
// size) is provided in the ComposeObject call, then metadata from the
// source is copied to the destination.
func NewDestinationInfo(bucket, object string, encryptSSEC *SSEInfo,
	userMeta map[string]string) (d DestinationInfo, err error) {

	// Input validation.
	if err = s3utils.CheckValidBucketName(bucket); err != nil {
		return d, err
	}
	if err = s3utils.CheckValidObjectName(object); err != nil {
		return d, err
	}

	// Process custom-metadata to remove a `x-amz-meta-` prefix if
	// present and validate that keys are distinct (after this
	// prefix removal).
	m := make(map[string]string)
	for k, v := range userMeta {
		if strings.HasPrefix(strings.ToLower(k), "x-amz-meta-") {
			k = k[len("x-amz-meta-"):]
		}
		if _, ok := m[k]; ok {
			return d, fmt.Errorf("Cannot add both %s and x-amz-meta-%s keys as custom metadata", k, k)
		}
		m[k] = v
	}

	return DestinationInfo{
		bucket:       bucket,
		object:       object,
		encryption:   encryptSSEC,
		userMetadata: m,
	}, nil
}

// getUserMetaHeadersMap - construct appropriate key-value pairs to send
// as headers from metadata map to pass into copy-object request. For
// single part copy-object (i.e. non-multipart object), enable the
// withCopyDirectiveHeader to set the `x-amz-metadata-directive` to
// `REPLACE`, so that metadata headers from the source are not copied
// over.
func (d *DestinationInfo) getUserMetaHeadersMap(withCopyDirectiveHeader bool) map[string]string {
	if len(d.userMetadata) == 0 {
		return nil
	}
	r := make(map[string]string)
	if withCopyDirectiveHeader {
		r["x-amz-metadata-directive"] = "REPLACE"
	}
	for k, v := range d.userMetadata {
		r["x-amz-meta-"+k] = v
	}
	return r
}

// SourceInfo - represents a source object to be copied, using
// server-side copying APIs.
type SourceInfo struct {
	bucket, object string

	start, end int64

	decryptKey *SSEInfo
	// Headers to send with the upload-part-copy request involving
	// this source object.
	Headers http.Header
}

// NewSourceInfo - create a compose-object/copy-object source info
// object.
//
// `decryptSSEC` is the decryption key using server-side-encryption
// with customer provided key. It may be nil if the source is not
// encrypted.
func NewSourceInfo(bucket, object string, decryptSSEC *SSEInfo) SourceInfo {
	r := SourceInfo{
		bucket:     bucket,
		object:     object,
		start:      -1, // range is unspecified by default
		decryptKey: decryptSSEC,
		Headers:    make(http.Header),
	}

	// Set the source header
	r.Headers.Set("x-amz-copy-source", s3utils.EncodePath(bucket+"/"+object))

	// Assemble decryption headers for upload-part-copy request
	for k, v := range decryptSSEC.getSSEHeaders(true) {
		r.Headers.Set(k, v)
	}

	return r
}

// SetRange - Set the start and end offset of the source object to be
// copied. If this method is not called, the whole source object is
// copied.
func (s *SourceInfo) SetRange(start, end int64) error {
	if start > end || start < 0 {
		return ErrInvalidArgument("start must be non-negative, and start must be at most end.")
	}
	// Note that 0 <= start <= end
	s.start, s.end = start, end
	return nil
}

// SetMatchETagCond - Set ETag match condition. The object is copied
// only if the etag of the source matches the value given here.
func (s *SourceInfo) SetMatchETagCond(etag string) error {
	if etag == "" {
		return ErrInvalidArgument("ETag cannot be empty.")
	}
	s.Headers.Set("x-amz-copy-source-if-match", etag)
	return nil
}

// SetMatchETagExceptCond - Set the ETag match exception
// condition. The object is copied only if the etag of the source is
// not the value given here.
func (s *SourceInfo) SetMatchETagExceptCond(etag string) error {
	if etag == "" {
		return ErrInvalidArgument("ETag cannot be empty.")
	}
	s.Headers.Set("x-amz-copy-source-if-none-match", etag)
	return nil
}

// SetModifiedSinceCond - Set the modified since condition.
func (s *SourceInfo) SetModifiedSinceCond(modTime time.Time) error {
	if modTime.IsZero() {
		return ErrInvalidArgument("Input time cannot be 0.")
	}
	s.Headers.Set("x-amz-copy-source-if-modified-since", modTime.Format(http.TimeFormat))
	return nil
}

// SetUnmodifiedSinceCond - Set the unmodified since condition.
func (s *SourceInfo) SetUnmodifiedSinceCond(modTime time.Time) error {
	if modTime.IsZero() {
		return ErrInvalidArgument("Input time cannot be 0.")
	}
	s.Headers.Set("x-amz-copy-source-if-unmodified-since", modTime.Format(http.TimeFormat))
	return nil
}

// Helper to fetch size and etag of an object using a StatObject call.
func (s *SourceInfo) getProps(c Client) (size int64, etag string, userMeta map[string]string, err error) {
	// Get object info - need size and etag here. Also, decryption
	// headers are added to the stat request if given.
	var objInfo ObjectInfo
	rh := NewGetReqHeaders()
	for k, v := range s.decryptKey.getSSEHeaders(false) {
		rh.Set(k, v)
	}
	objInfo, err = c.statObject(s.bucket, s.object, rh)
	if err != nil {
		err = fmt.Errorf("Could not stat object - %s/%s: %v", s.bucket, s.object, err)
	} else {
		size = objInfo.Size
		etag = objInfo.ETag
		userMeta = make(map[string]string)
		for k, v := range objInfo.Metadata {
			if strings.HasPrefix(k, "x-amz-meta-") {
				if len(v) > 0 {
					userMeta[k] = v[0]
				}
			}
		}
	}
	return
}

// uploadPartCopy - helper function to create a part in a multipart
// upload via an upload-part-copy request
// https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
func (c Client) uploadPartCopy(bucket, object, uploadID string, partNumber int,
	headers http.Header) (p CompletePart, err error) {

	// Build query parameters
	urlValues := make(url.Values)
	urlValues.Set("partNumber", strconv.Itoa(partNumber))
	urlValues.Set("uploadId", uploadID)

	// Send upload-part-copy request
	resp, err := c.executeMethod("PUT", requestMetadata{
		bucketName:   bucket,
		objectName:   object,
		customHeader: headers,
		queryValues:  urlValues,
	})
	defer closeResponse(resp)
	if err != nil {
		return p, err
	}

	// Check if we got an error response.
	if resp.StatusCode != http.StatusOK {
		return p, httpRespToErrorResponse(resp, bucket, object)
	}

	// Decode copy-part response on success.
	cpObjRes := copyObjectResult{}
	err = xmlDecoder(resp.Body, &cpObjRes)
	if err != nil {
		return p, err
	}
	p.PartNumber, p.ETag = partNumber, cpObjRes.ETag
	return p, nil
}

// ComposeObject - creates an object using server-side copying of
// existing objects. It takes a list of source objects (with optional
// offsets) and concatenates them into a new object using only
// server-side copying operations.
func (c Client) ComposeObject(dst DestinationInfo, srcs []SourceInfo) error {
	if len(srcs) < 1 || len(srcs) > maxPartsCount {
		return ErrInvalidArgument("There must be as least one and upto 10000 source objects.")
	}

	srcSizes := make([]int64, len(srcs))
	var totalSize, size, totalParts int64
	var srcUserMeta map[string]string
	var etag string
	var err error
	for i, src := range srcs {
		size, etag, srcUserMeta, err = src.getProps(c)
		if err != nil {
			return fmt.Errorf("Could not get source props for %s/%s: %v", src.bucket, src.object, err)
		}

		// Error out if client side encryption is used in this source object when
		// more than one source objects are given.
		if len(srcs) > 1 && src.Headers.Get("x-amz-meta-x-amz-key") != "" {
			return ErrInvalidArgument(
				fmt.Sprintf("Client side encryption is used in source object %s/%s", src.bucket, src.object))
		}

		// Since we did a HEAD to get size, we use the ETag
		// value to make sure the object has not changed by
		// the time we perform the copy. This is done, only if
		// the user has not set their own ETag match
		// condition.
		if src.Headers.Get("x-amz-copy-source-if-match") == "" {
			src.SetMatchETagCond(etag)
		}

		// Check if a segment is specified, and if so, is the
		// segment within object bounds?
		if src.start != -1 {
			// Since range is specified,
			//    0 <= src.start <= src.end
			// so only invalid case to check is:
			if src.end >= size {
				return ErrInvalidArgument(
					fmt.Sprintf("SourceInfo %d has invalid segment-to-copy [%d, %d] (size is %d)",
						i, src.start, src.end, size))
			}
			size = src.end - src.start + 1
		}

		// Only the last source may be less than `absMinPartSize`
		if size < absMinPartSize && i < len(srcs)-1 {
			return ErrInvalidArgument(
				fmt.Sprintf("SourceInfo %d is too small (%d) and it is not the last part", i, size))
		}

		// Is data to copy too large?
		totalSize += size
		if totalSize > maxMultipartPutObjectSize {
			return ErrInvalidArgument(fmt.Sprintf("Cannot compose an object of size %d (> 5TiB)", totalSize))
		}

		// record source size
		srcSizes[i] = size

		// calculate parts needed for current source
		totalParts += partsRequired(size)
		// Do we need more parts than we are allowed?
		if totalParts > maxPartsCount {
			return ErrInvalidArgument(fmt.Sprintf(
				"Your proposed compose object requires more than %d parts", maxPartsCount))
		}
	}

	// Single source object case (i.e. when only one source is
	// involved, it is being copied wholly and at most 5GiB in
	// size).
	if totalParts == 1 && srcs[0].start == -1 && totalSize <= maxPartSize {
		h := srcs[0].Headers
		// Add destination encryption headers
		for k, v := range dst.encryption.getSSEHeaders(false) {
			h.Set(k, v)
		}

		// If no user metadata is specified (and so, the
		// for-loop below is not entered), metadata from the
		// source is copied to the destination (due to
		// single-part copy-object PUT request behaviour).
		for k, v := range dst.getUserMetaHeadersMap(true) {
			h.Set(k, v)
		}

		// Send copy request
		resp, err := c.executeMethod("PUT", requestMetadata{
			bucketName:   dst.bucket,
			objectName:   dst.object,
			customHeader: h,
		})
		defer closeResponse(resp)
		if err != nil {
			return err
		}
		// Check if we got an error response.
		if resp.StatusCode != http.StatusOK {
			return httpRespToErrorResponse(resp, dst.bucket, dst.object)
		}

		// Return nil on success.
		return nil
	}

	// Now, handle multipart-copy cases.

	// 1. Initiate a new multipart upload.

	// Set user-metadata on the destination object. If no
	// user-metadata is specified, and there is only one source,
	// (only) then metadata from source is copied.
	userMeta := dst.getUserMetaHeadersMap(false)
	metaMap := userMeta
	if len(userMeta) == 0 && len(srcs) == 1 {
		metaMap = srcUserMeta
	}
	metaHeaders := make(map[string][]string)
	for k, v := range metaMap {
		metaHeaders[k] = append(metaHeaders[k], v)
	}
	uploadID, err := c.newUploadID(dst.bucket, dst.object, metaHeaders)
	if err != nil {
		return fmt.Errorf("Error creating new upload: %v", err)
	}

	// 2. Perform copy part uploads
	objParts := []CompletePart{}
	partIndex := 1
	for i, src := range srcs {
		h := src.Headers
		// Add destination encryption headers
		for k, v := range dst.encryption.getSSEHeaders(false) {
			h.Set(k, v)
		}

		// calculate start/end indices of parts after
		// splitting.
		startIdx, endIdx := calculateEvenSplits(srcSizes[i], src)
		for j, start := range startIdx {
			end := endIdx[j]

			// Add (or reset) source range header for
			// upload part copy request.
			h.Set("x-amz-copy-source-range",
				fmt.Sprintf("bytes=%d-%d", start, end))

			// make upload-part-copy request
			complPart, err := c.uploadPartCopy(dst.bucket,
				dst.object, uploadID, partIndex, h)
			if err != nil {
				return fmt.Errorf("Error in upload-part-copy - %v", err)
			}
			objParts = append(objParts, complPart)
			partIndex++
		}
	}

	// 3. Make final complete-multipart request.
	_, err = c.completeMultipartUpload(dst.bucket, dst.object, uploadID,
		completeMultipartUpload{Parts: objParts})
	if err != nil {
		err = fmt.Errorf("Error in complete-multipart request - %v", err)
	}
	return err
}

// partsRequired is ceiling(size / copyPartSize)
func partsRequired(size int64) int64 {
	r := size / copyPartSize
	if size%copyPartSize > 0 {
		r++
	}
	return r
}

// calculateEvenSplits - computes splits for a source and returns
// start and end index slices. Splits happen evenly to be sure that no
// part is less than 5MiB, as that could fail the multipart request if
// it is not the last part.
func calculateEvenSplits(size int64, src SourceInfo) (startIndex, endIndex []int64) {
	if size == 0 {
		return
	}

	reqParts := partsRequired(size)
	startIndex = make([]int64, reqParts)
	endIndex = make([]int64, reqParts)
	// Compute number of required parts `k`, as:
	//
	// k = ceiling(size / copyPartSize)
	//
	// Now, distribute the `size` bytes in the source into
	// k parts as evenly as possible:
	//
	// r parts sized (q+1) bytes, and
	// (k - r) parts sized q bytes, where
	//
	// size = q * k + r (by simple division of size by k,
	// so that 0 <= r < k)
	//
	start := src.start
	if start == -1 {
		start = 0
	}
	quot, rem := size/reqParts, size%reqParts
	nextStart := start
	for j := int64(0); j < reqParts; j++ {
		curPartSize := quot
		if j < rem {
			curPartSize++
		}

		cStart := nextStart
		cEnd := cStart + curPartSize - 1
		nextStart = cEnd + 1

		startIndex[j], endIndex[j] = cStart, cEnd
	}
	return
}
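As context for the new Compose API above, a hedged usage sketch. Endpoint, credentials, and bucket/object names are placeholders, and error handling is kept minimal; the constructors and ComposeObject signature follow the vendored code shown above:

package main

import (
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	// Placeholder endpoint and credentials, for illustration only.
	c, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true)
	if err != nil {
		log.Fatal(err)
	}

	// Concatenate two source objects into one destination object using
	// only server-side copy operations (no data passes through the client).
	src1 := minio.NewSourceInfo("srcbucket", "part-1", nil)
	src2 := minio.NewSourceInfo("srcbucket", "part-2", nil)

	dst, err := minio.NewDestinationInfo("dstbucket", "combined", nil, nil)
	if err != nil {
		log.Fatal(err)
	}

	if err := c.ComposeObject(dst, []minio.SourceInfo{src1, src2}); err != nil {
		log.Fatal(err)
	}
	log.Println("composed OK")
}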
vendor/src/github.com/minio/minio-go/api-compose-object_test.go (vendored, new file, 88 lines)
@@ -0,0 +1,88 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2017 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package minio

import (
	"reflect"
	"testing"
)

const (
	gb1    = 1024 * 1024 * 1024
	gb5    = 5 * gb1
	gb5p1  = gb5 + 1
	gb10p1 = 2*gb5 + 1
	gb10p2 = 2*gb5 + 2
)

func TestPartsRequired(t *testing.T) {
	testCases := []struct {
		size, ref int64
	}{
		{0, 0},
		{1, 1},
		{gb5, 1},
		{2 * gb5, 2},
		{gb10p1, 3},
		{gb10p2, 3},
	}

	for i, testCase := range testCases {
		res := partsRequired(testCase.size)
		if res != testCase.ref {
			t.Errorf("Test %d - output did not match with reference results", i+1)
		}
	}
}

func TestCalculateEvenSplits(t *testing.T) {

	testCases := []struct {
		// input size and source object
		size int64
		src  SourceInfo

		// output part-indexes
		starts, ends []int64
	}{
		{0, SourceInfo{start: -1}, nil, nil},
		{1, SourceInfo{start: -1}, []int64{0}, []int64{0}},
		{1, SourceInfo{start: 0}, []int64{0}, []int64{0}},

		{gb1, SourceInfo{start: -1}, []int64{0}, []int64{gb1 - 1}},
		{gb5, SourceInfo{start: -1}, []int64{0}, []int64{gb5 - 1}},

		// 2 part splits
		{gb5p1, SourceInfo{start: -1}, []int64{0, gb5/2 + 1}, []int64{gb5 / 2, gb5}},
		{gb5p1, SourceInfo{start: -1}, []int64{0, gb5/2 + 1}, []int64{gb5 / 2, gb5}},

		// 3 part splits
		{gb10p1, SourceInfo{start: -1},
			[]int64{0, gb10p1/3 + 1, 2*gb10p1/3 + 1},
			[]int64{gb10p1 / 3, 2 * gb10p1 / 3, gb10p1 - 1}},

		{gb10p2, SourceInfo{start: -1},
			[]int64{0, gb10p2 / 3, 2 * gb10p2 / 3},
			[]int64{gb10p2/3 - 1, 2*gb10p2/3 - 1, gb10p2 - 1}},
	}

	for i, testCase := range testCases {
		resStart, resEnd := calculateEvenSplits(testCase.size, testCase.src)
		if !reflect.DeepEqual(testCase.starts, resStart) || !reflect.DeepEqual(testCase.ends, resEnd) {
			t.Errorf("Test %d - output did not match with reference results", i+1)
		}
	}
}
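The gb10p2 case can be checked by hand. A tiny program reproducing the even-split arithmetic; the 5 GiB constant value is inferred from the test constants above:

package main

import "fmt"

func main() {
	const copyPartSize = int64(5 * 1024 * 1024 * 1024) // 5 GiB, matching gb5 above
	size := 2*copyPartSize + 2                         // the gb10p2 test case

	// k = ceiling(size / copyPartSize)
	k := size / copyPartSize
	if size%copyPartSize > 0 {
		k++
	}

	// Distribute size into k parts: rem parts of quot+1 bytes, the rest quot bytes.
	quot, rem := size/k, size%k
	fmt.Println(k, quot, rem) // 3 3579139414 0 -> three equal parts
}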
@@ -17,10 +17,8 @@
 package minio

 import (
-	"fmt"
 	"hash"
 	"io"
-	"io/ioutil"
 	"math"
 	"os"
@@ -78,55 +76,6 @@ func optimalPartInfo(objectSize int64) (totalPartsCount int, partSize int64, las
 	return totalPartsCount, partSize, lastPartSize, nil
 }

-// hashCopyBuffer is identical to hashCopyN except that it doesn't take
-// any size argument but takes a buffer argument and reader should be
-// of io.ReaderAt interface.
-//
-// Stages reads from offsets into the buffer, if buffer is nil it is
-// initialized to optimalBufferSize.
-func hashCopyBuffer(hashAlgorithms map[string]hash.Hash, hashSums map[string][]byte, writer io.Writer, reader io.ReaderAt, buf []byte) (size int64, err error) {
-	hashWriter := writer
-	for _, v := range hashAlgorithms {
-		hashWriter = io.MultiWriter(hashWriter, v)
-	}
-
-	// Buffer is nil, initialize.
-	if buf == nil {
-		buf = make([]byte, optimalReadBufferSize)
-	}
-
-	// Offset to start reading from.
-	var readAtOffset int64
-
-	// Following block reads data at an offset from the input
-	// reader and copies data to into local temporary file.
-	for {
-		readAtSize, rerr := reader.ReadAt(buf, readAtOffset)
-		if rerr != nil {
-			if rerr != io.EOF {
-				return 0, rerr
-			}
-		}
-		writeSize, werr := hashWriter.Write(buf[:readAtSize])
-		if werr != nil {
-			return 0, werr
-		}
-		if readAtSize != writeSize {
-			return 0, fmt.Errorf("Read size was not completely written to writer. wanted %d, got %d - %s", readAtSize, writeSize, reportIssue)
-		}
-		readAtOffset += int64(writeSize)
-		size += int64(writeSize)
-		if rerr == io.EOF {
-			break
-		}
-	}
-
-	for k, v := range hashAlgorithms {
-		hashSums[k] = v.Sum(nil)
-	}
-	return size, err
-}
-
 // hashCopyN - Calculates chosen hashes up to partSize amount of bytes.
 func hashCopyN(hashAlgorithms map[string]hash.Hash, hashSums map[string][]byte, writer io.Writer, reader io.Reader, partSize int64) (size int64, err error) {
 	hashWriter := writer
@@ -167,27 +116,3 @@ func (c Client) newUploadID(bucketName, objectName string, metaData map[string][
 	}
 	return initMultipartUploadResult.UploadID, nil
 }
-
-// computeHash - Calculates hashes for an input read Seeker.
-func computeHash(hashAlgorithms map[string]hash.Hash, hashSums map[string][]byte, reader io.ReadSeeker) (size int64, err error) {
-	hashWriter := ioutil.Discard
-	for _, v := range hashAlgorithms {
-		hashWriter = io.MultiWriter(hashWriter, v)
-	}
-
-	// If no buffer is provided, no need to allocate just use io.Copy.
-	size, err = io.Copy(hashWriter, reader)
-	if err != nil {
-		return 0, err
-	}
-
-	// Seek back reader to the beginning location.
-	if _, err := reader.Seek(0, 0); err != nil {
-		return 0, err
-	}
-
-	for k, v := range hashAlgorithms {
-		hashSums[k] = v.Sum(nil)
-	}
-	return size, nil
-}
@@ -16,57 +16,7 @@
 package minio

-import (
-	"net/http"
-
-	"github.com/minio/minio-go/pkg/s3utils"
-)
-
-// CopyObject - copy a source object into a new object with the provided name in the provided bucket
-func (c Client) CopyObject(bucketName string, objectName string, objectSource string, cpCond CopyConditions) error {
-	// Input validation.
-	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
-		return err
-	}
-	if err := s3utils.CheckValidObjectName(objectName); err != nil {
-		return err
-	}
-	if objectSource == "" {
-		return ErrInvalidArgument("Object source cannot be empty.")
-	}
-
-	// customHeaders apply headers.
-	customHeaders := make(http.Header)
-	for _, cond := range cpCond.conditions {
-		customHeaders.Set(cond.key, cond.value)
-	}
-
-	// Set copy source.
-	customHeaders.Set("x-amz-copy-source", s3utils.EncodePath(objectSource))
-
-	// Execute PUT on objectName.
-	resp, err := c.executeMethod("PUT", requestMetadata{
-		bucketName:   bucketName,
-		objectName:   objectName,
-		customHeader: customHeaders,
-	})
-	defer closeResponse(resp)
-	if err != nil {
-		return err
-	}
-	if resp != nil {
-		if resp.StatusCode != http.StatusOK {
-			return httpRespToErrorResponse(resp, bucketName, objectName)
-		}
-	}
-
-	// Decode copy response on success.
-	cpObjRes := copyObjectResult{}
-	err = xmlDecoder(resp.Body, &cpObjRes)
-	if err != nil {
-		return err
-	}
-
-	// Return nil on success.
-	return nil
+// CopyObject - copy a source object into a new object
+func (c Client) CopyObject(dst DestinationInfo, src SourceInfo) error {
+	return c.ComposeObject(dst, []SourceInfo{src})
 }
vendor/src/github.com/minio/minio-go/api-put-object-encrypted.go (vendored, new file, 46 lines)
@@ -0,0 +1,46 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package minio

import (
	"io"

	"github.com/minio/minio-go/pkg/encrypt"
)

// PutEncryptedObject - Encrypt and store object.
func (c Client) PutEncryptedObject(bucketName, objectName string, reader io.Reader, encryptMaterials encrypt.Materials, metadata map[string][]string, progress io.Reader) (n int64, err error) {

	if encryptMaterials == nil {
		return 0, ErrInvalidArgument("Unable to recognize empty encryption properties")
	}

	if err := encryptMaterials.SetupEncryptMode(reader); err != nil {
		return 0, err
	}

	if metadata == nil {
		metadata = make(map[string][]string)
	}

	// Set the necessary encryption headers, for future decryption.
	metadata[amzHeaderIV] = []string{encryptMaterials.GetIV()}
	metadata[amzHeaderKey] = []string{encryptMaterials.GetKey()}
	metadata[amzHeaderMatDesc] = []string{encryptMaterials.GetDesc()}

	return c.putObjectMultipart(bucketName, objectName, encryptMaterials, -1, metadata, progress)
}
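A hedged usage sketch for client-side encryption with this API. It assumes the encrypt package's symmetric-key helpers of that era (NewSymmetricKey, NewCBCSecureMaterials); the endpoint, credentials, key, and object names are placeholders:

package main

import (
	"log"
	"os"

	minio "github.com/minio/minio-go"
	"github.com/minio/minio-go/pkg/encrypt"
)

func main() {
	c, err := minio.New("play.minio.io:9000", "ACCESS-KEY", "SECRET-KEY", true)
	if err != nil {
		log.Fatal(err)
	}

	// Client-side encryption materials; the 32-byte key is a placeholder.
	key := encrypt.NewSymmetricKey([]byte("32byteslongsecretkeymustbegiven1"))
	materials, err := encrypt.NewCBCSecureMaterials(key)
	if err != nil {
		log.Fatal(err)
	}

	f, err := os.Open("backup.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Note that the size passed on internally is -1: the payload length
	// is only known after encryption, so a multipart upload is used.
	if _, err := c.PutEncryptedObject("mybucket", "backup.tar.enc", f, materials, nil, nil); err != nil {
		log.Fatal(err)
	}
}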
@@ -17,13 +17,9 @@
 package minio

 import (
-	"fmt"
-	"io"
-	"io/ioutil"
 	"mime"
 	"os"
 	"path/filepath"
-	"sort"

 	"github.com/minio/minio-go/pkg/s3utils"
 )
@@ -55,11 +51,6 @@ func (c Client) FPutObject(bucketName, objectName, filePath, contentType string)
 	// Save the file size.
 	fileSize := fileStat.Size()

-	// Check for largest object size allowed.
-	if fileSize > int64(maxMultipartPutObjectSize) {
-		return 0, ErrEntityTooLarge(fileSize, maxMultipartPutObjectSize, bucketName, objectName)
-	}
-
 	objMetadata := make(map[string][]string)

 	// Set contentType based on filepath extension if not given or default
@@ -71,195 +62,5 @@ func (c Client) FPutObject(bucketName, objectName, filePath, contentType string)
 	}

 	objMetadata["Content-Type"] = []string{contentType}

-	// NOTE: Google Cloud Storage multipart Put is not compatible with Amazon S3 APIs.
-	if s3utils.IsGoogleEndpoint(c.endpointURL) {
-		// Do not compute MD5 for Google Cloud Storage.
-		return c.putObjectNoChecksum(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
-	}
-
-	// Small object upload is initiated for uploads for input data size smaller than 5MiB.
-	if fileSize < minPartSize && fileSize >= 0 {
-		return c.putObjectSingle(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
-	}
-
-	// Upload all large objects as multipart.
-	n, err = c.putObjectMultipartFromFile(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
-	if err != nil {
-		errResp := ToErrorResponse(err)
-		// Verify if multipart functionality is not available, if not
-		// fall back to single PutObject operation.
-		if errResp.Code == "NotImplemented" {
-			// If size of file is greater than '5GiB' fail.
-			if fileSize > maxSinglePutObjectSize {
-				return 0, ErrEntityTooLarge(fileSize, maxSinglePutObjectSize, bucketName, objectName)
-			}
-			// Fall back to uploading as single PutObject operation.
-			return c.putObjectSingle(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
-		}
-		return n, err
-	}
-	return n, nil
-}
-
-// putObjectMultipartFromFile - Creates object from contents of *os.File
-//
-// NOTE: This function is meant to be used for readers with local
-// file as in *os.File. This function effectively utilizes file
-// system capabilities of reading from specific sections and not
-// having to create temporary files.
-func (c Client) putObjectMultipartFromFile(bucketName, objectName string, fileReader io.ReaderAt, fileSize int64, metaData map[string][]string, progress io.Reader) (int64, error) {
-	// Input validation.
-	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
-		return 0, err
-	}
-	if err := s3utils.CheckValidObjectName(objectName); err != nil {
-		return 0, err
-	}
-
-	// Initiate a new multipart upload.
-	uploadID, err := c.newUploadID(bucketName, objectName, metaData)
-	if err != nil {
-		return 0, err
-	}
-
-	// Total data read and written to server. should be equal to 'size' at the end of the call.
-	var totalUploadedSize int64
-
-	// Complete multipart upload.
-	var complMultipartUpload completeMultipartUpload
-
-	// Calculate the optimal parts info for a given size.
-	totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(fileSize)
-	if err != nil {
-		return 0, err
-	}
-
-	// Create a channel to communicate a part was uploaded.
-	// Buffer this to 10000, the maximum number of parts allowed by S3.
-	uploadedPartsCh := make(chan uploadedPartRes, 10000)
-
-	// Create a channel to communicate which part to upload.
-	// Buffer this to 10000, the maximum number of parts allowed by S3.
-	uploadPartsCh := make(chan uploadPartReq, 10000)
-
-	// Just for readability.
-	lastPartNumber := totalPartsCount
-
-	// Initialize parts uploaded map.
-	partsInfo := make(map[int]ObjectPart)
-
-	// Send each part through the partUploadCh to be uploaded.
-	for p := 1; p <= totalPartsCount; p++ {
-		part, ok := partsInfo[p]
-		if ok {
-			uploadPartsCh <- uploadPartReq{PartNum: p, Part: &part}
-		} else {
-			uploadPartsCh <- uploadPartReq{PartNum: p, Part: nil}
-		}
-	}
-	close(uploadPartsCh)
-
-	// Use three 'workers' to upload parts in parallel.
-	for w := 1; w <= totalWorkers; w++ {
-		go func() {
-			// Deal with each part as it comes through the channel.
-			for uploadReq := range uploadPartsCh {
-				// Add hash algorithms that need to be calculated by computeHash()
-				// In case of a non-v4 signature or https connection, sha256 is not needed.
-				hashAlgos, hashSums := c.hashMaterials()
-
-				// If partNumber was not uploaded we calculate the missing
-				// part offset and size. For all other part numbers we
-				// calculate offset based on multiples of partSize.
-				readOffset := int64(uploadReq.PartNum-1) * partSize
-				missingPartSize := partSize
-
-				// As a special case if partNumber is lastPartNumber, we
-				// calculate the offset based on the last part size.
-				if uploadReq.PartNum == lastPartNumber {
-					readOffset = (fileSize - lastPartSize)
-					missingPartSize = lastPartSize
-				}
-
-				// Get a section reader on a particular offset.
-				sectionReader := io.NewSectionReader(fileReader, readOffset, missingPartSize)
-				var prtSize int64
-				var err error
-
-				prtSize, err = computeHash(hashAlgos, hashSums, sectionReader)
-				if err != nil {
-					uploadedPartsCh <- uploadedPartRes{
-						Error: err,
-					}
-					// Exit the goroutine.
-					return
-				}
-
-				// Proceed to upload the part.
-				var objPart ObjectPart
-				objPart, err = c.uploadPart(bucketName, objectName, uploadID, sectionReader, uploadReq.PartNum,
-					hashSums["md5"], hashSums["sha256"], prtSize)
-				if err != nil {
-					uploadedPartsCh <- uploadedPartRes{
-						Error: err,
-					}
-					// Exit the goroutine.
-					return
-				}
-
-				// Save successfully uploaded part metadata.
-				uploadReq.Part = &objPart
-
-				// Return through the channel the part size.
-				uploadedPartsCh <- uploadedPartRes{
-					Size:    missingPartSize,
-					PartNum: uploadReq.PartNum,
-					Part:    uploadReq.Part,
-					Error:   nil,
-				}
-			}
-		}()
-	}
-
-	// Retrieve each uploaded part once it is done.
-	for u := 1; u <= totalPartsCount; u++ {
-		uploadRes := <-uploadedPartsCh
-		if uploadRes.Error != nil {
-			return totalUploadedSize, uploadRes.Error
-		}
-		// Retrieve each uploaded part and store it to be completed.
-		part := uploadRes.Part
-		if part == nil {
-			return totalUploadedSize, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", uploadRes.PartNum))
-		}
-		// Update the total uploaded size.
-		totalUploadedSize += uploadRes.Size
-		// Update the progress bar if there is one.
-		if progress != nil {
-			if _, err = io.CopyN(ioutil.Discard, progress, uploadRes.Size); err != nil {
-				return totalUploadedSize, err
-			}
-		}
-		// Store the part to be completed.
-		complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
-			ETag:       part.ETag,
-			PartNumber: part.PartNumber,
-		})
-	}
-
-	// Verify if we uploaded all data.
-	if totalUploadedSize != fileSize {
-		return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, fileSize, bucketName, objectName)
-	}
-
-	// Sort all completed parts.
-	sort.Sort(completedParts(complMultipartUpload.Parts))
-	_, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload)
-	if err != nil {
-		return totalUploadedSize, err
-	}
-
-	// Return final size.
-	return totalUploadedSize, nil
+	return c.putObjectCommon(bucketName, objectName, fileReader, fileSize, objMetadata, nil)
 }
@@ -24,7 +24,6 @@ import (
 	"io/ioutil"
 	"net/http"
 	"net/url"
-	"os"
 	"sort"
 	"strconv"
 	"strings"
@@ -32,161 +31,60 @@ import (
 	"github.com/minio/minio-go/pkg/s3utils"
 )

-// Comprehensive put object operation involving multipart uploads.
-//
-// Following code handles these types of readers.
-//
-// - *os.File
-// - *minio.Object
-// - Any reader which has a method 'ReadAt()'
-//
-func (c Client) putObjectMultipart(bucketName, objectName string, reader io.Reader, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) {
-	if size > 0 && size > minPartSize {
-		// Verify if reader is *os.File, then use file system functionalities.
-		if isFile(reader) {
-			return c.putObjectMultipartFromFile(bucketName, objectName, reader.(*os.File), size, metaData, progress)
-		}
-		// Verify if reader is *minio.Object or io.ReaderAt.
-		// NOTE: Verification of object is kept for a specific purpose
-		// while it is going to be duck typed similar to io.ReaderAt.
-		// It is to indicate that *minio.Object implements io.ReaderAt.
-		// and such a functionality is used in the subsequent code
-		// path.
-		if isObject(reader) || isReadAt(reader) {
-			return c.putObjectMultipartFromReadAt(bucketName, objectName, reader.(io.ReaderAt), size, metaData, progress)
+func (c Client) putObjectMultipart(bucketName, objectName string, reader io.Reader, size int64,
+	metadata map[string][]string, progress io.Reader) (n int64, err error) {
+	n, err = c.putObjectMultipartNoStream(bucketName, objectName, reader, size, metadata, progress)
+	if err != nil {
+		errResp := ToErrorResponse(err)
+		// Verify if multipart functionality is not available, if not
+		// fall back to single PutObject operation.
+		if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
+			// Verify if size of reader is greater than '5GiB'.
+			if size > maxSinglePutObjectSize {
+				return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
+			}
+			// Fall back to uploading as single PutObject operation.
+			return c.putObjectNoChecksum(bucketName, objectName, reader, size, metadata, progress)
 		}
 	}
-	// For any other data size and reader type we do generic multipart
-	// approach by staging data in temporary files and uploading them.
-	return c.putObjectMultipartStream(bucketName, objectName, reader, size, metaData, progress)
+	return n, err
 }

-// putObjectMultipartStreamNoChecksum - upload a large object using
-// multipart upload and streaming signature for signing payload.
-func (c Client) putObjectMultipartStreamNoChecksum(bucketName, objectName string,
-	reader io.Reader, size int64, metadata map[string][]string, progress io.Reader) (int64, error) {
-
+func (c Client) putObjectMultipartNoStream(bucketName, objectName string, reader io.Reader, size int64,
+	metadata map[string][]string, progress io.Reader) (n int64, err error) {
 	// Input validation.
-	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
+	if err = s3utils.CheckValidBucketName(bucketName); err != nil {
 		return 0, err
 	}
-	if err := s3utils.CheckValidObjectName(objectName); err != nil {
+	if err = s3utils.CheckValidObjectName(objectName); err != nil {
 		return 0, err
 	}

-	// Initiates a new multipart request
-	uploadID, err := c.newUploadID(bucketName, objectName, metadata)
-	if err != nil {
-		return 0, err
-	}
-
-	// Calculate the optimal parts info for a given size.
-	totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(size)
-	if err != nil {
-		return 0, err
-	}
-
-	// Total data read and written to server. should be equal to 'size' at the end of the call.
-	var totalUploadedSize int64
-
-	// Initialize parts uploaded map.
-	partsInfo := make(map[int]ObjectPart)
-
-	// Part number always starts with '1'.
-	var partNumber int
-	for partNumber = 1; partNumber <= totalPartsCount; partNumber++ {
-		// Update progress reader appropriately to the latest offset
-		// as we read from the source.
-		hookReader := newHook(reader, progress)
-
-		// Proceed to upload the part.
-		if partNumber == totalPartsCount {
-			partSize = lastPartSize
-		}
-
-		var objPart ObjectPart
-		objPart, err = c.uploadPart(bucketName, objectName, uploadID,
-			io.LimitReader(hookReader, partSize), partNumber, nil, nil, partSize)
-		// For unknown size, Read EOF we break away.
-		// We do not have to upload till totalPartsCount.
-		if err == io.EOF && size < 0 {
-			break
-		}
-
-		if err != nil {
-			return totalUploadedSize, err
-		}
-
-		// Save successfully uploaded part metadata.
-		partsInfo[partNumber] = objPart
-
-		// Save successfully uploaded size.
-		totalUploadedSize += partSize
-	}
-
-	// Verify if we uploaded all the data.
-	if size > 0 {
-		if totalUploadedSize != size {
-			return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
-		}
-	}
-
-	// Complete multipart upload.
-	var complMultipartUpload completeMultipartUpload
-
-	// Loop over total uploaded parts to save them in
-	// Parts array before completing the multipart request.
-	for i := 1; i < partNumber; i++ {
-		part, ok := partsInfo[i]
-		if !ok {
-			return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", i))
-		}
-		complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
-			ETag:       part.ETag,
-			PartNumber: part.PartNumber,
-		})
-	}
-
-	// Sort all completed parts.
-	sort.Sort(completedParts(complMultipartUpload.Parts))
-	_, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload)
-	if err != nil {
-		return totalUploadedSize, err
-	}
-
-	// Return final size.
-	return totalUploadedSize, nil
-}
-
-// putObjectStream uploads files bigger than 64MiB, and also supports
-// special case where size is unknown i.e '-1'.
-func (c Client) putObjectMultipartStream(bucketName, objectName string, reader io.Reader, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) {
-	// Input validation.
-	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
-		return 0, err
-	}
-	if err := s3utils.CheckValidObjectName(objectName); err != nil {
-		return 0, err
-	}
-
-	// Total data read and written to server. should be equal to 'size' at the end of the call.
+	// Total data read and written to server. should be equal to
+	// 'size' at the end of the call.
 	var totalUploadedSize int64

 	// Complete multipart upload.
 	var complMultipartUpload completeMultipartUpload

-	// Initiate a new multipart upload.
-	uploadID, err := c.newUploadID(bucketName, objectName, metaData)
-	if err != nil {
-		return 0, err
-	}
-
 	// Calculate the optimal parts info for a given size.
 	totalPartsCount, partSize, _, err := optimalPartInfo(size)
 	if err != nil {
 		return 0, err
 	}

+	// Initiate a new multipart upload.
+	uploadID, err := c.newUploadID(bucketName, objectName, metadata)
+	if err != nil {
+		return 0, err
+	}
+
+	defer func() {
+		if err != nil {
+			c.abortMultipartUpload(bucketName, objectName, uploadID)
+		}
+	}()
+
 	// Part number always starts with '1'.
 	partNumber := 1
@@ -197,8 +95,9 @@ func (c Client) putObjectMultipartStream(bucketName, objectName string, reader i
 	partsInfo := make(map[int]ObjectPart)

 	for partNumber <= totalPartsCount {
-		// Choose hash algorithms to be calculated by hashCopyN, avoid sha256
-		// with non-v4 signature request or HTTPS connection
+		// Choose hash algorithms to be calculated by hashCopyN,
+		// avoid sha256 with non-v4 signature request or
+		// HTTPS connection.
 		hashAlgos, hashSums := c.hashMaterials()

 		// Calculates hash sums while copying partSize bytes into tmpBuffer.
@@ -214,7 +113,8 @@ func (c Client) putObjectMultipartStream(bucketName, objectName string, reader i
 		// Proceed to upload the part.
 		var objPart ObjectPart
-		objPart, err = c.uploadPart(bucketName, objectName, uploadID, reader, partNumber, hashSums["md5"], hashSums["sha256"], prtSize)
+		objPart, err = c.uploadPart(bucketName, objectName, uploadID, reader, partNumber,
+			hashSums["md5"], hashSums["sha256"], prtSize, metadata)
 		if err != nil {
 			// Reset the temporary buffer upon any error.
 			tmpBuffer.Reset()
@@ -224,13 +124,6 @@ func (c Client) putObjectMultipartStream(bucketName, objectName string, reader i
 		// Save successfully uploaded part metadata.
 		partsInfo[partNumber] = objPart

-		// Update the progress reader for the skipped part.
-		if progress != nil {
-			if _, err = io.CopyN(ioutil.Discard, progress, prtSize); err != nil {
-				return totalUploadedSize, err
-			}
-		}
-
 		// Reset the temporary buffer.
 		tmpBuffer.Reset()
@@ -269,8 +162,7 @@ func (c Client) putObjectMultipartStream(bucketName, objectName string, reader i
 	// Sort all completed parts.
 	sort.Sort(completedParts(complMultipartUpload.Parts))
-	_, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload)
-	if err != nil {
+	if _, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload); err != nil {
 		return totalUploadedSize, err
 	}
@@ -279,7 +171,7 @@ func (c Client) putObjectMultipartStream(bucketName, objectName string, reader i
 }

 // initiateMultipartUpload - Initiates a multipart upload and returns an upload ID.
-func (c Client) initiateMultipartUpload(bucketName, objectName string, metaData map[string][]string) (initiateMultipartUploadResult, error) {
+func (c Client) initiateMultipartUpload(bucketName, objectName string, metadata map[string][]string) (initiateMultipartUploadResult, error) {
 	// Input validation.
 	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
 		return initiateMultipartUploadResult{}, err
@@ -294,14 +186,14 @@ func (c Client) initiateMultipartUpload(bucketName, objectName string, metaData
 	// Set ContentType header.
 	customHeader := make(http.Header)
-	for k, v := range metaData {
+	for k, v := range metadata {
 		if len(v) > 0 {
 			customHeader.Set(k, v[0])
 		}
 	}

 	// Set a default content-type header if the latter is not provided
-	if v, ok := metaData["Content-Type"]; !ok || len(v) == 0 {
+	if v, ok := metadata["Content-Type"]; !ok || len(v) == 0 {
 		customHeader.Set("Content-Type", "application/octet-stream")
 	}
@@ -332,8 +224,11 @@ func (c Client) initiateMultipartUpload(bucketName, objectName string, metaData
 	return initiateMultipartUploadResult, nil
 }

+const serverEncryptionKeyPrefix = "x-amz-server-side-encryption"
+
 // uploadPart - Uploads a part in a multipart upload.
-func (c Client) uploadPart(bucketName, objectName, uploadID string, reader io.Reader, partNumber int, md5Sum, sha256Sum []byte, size int64) (ObjectPart, error) {
+func (c Client) uploadPart(bucketName, objectName, uploadID string, reader io.Reader,
+	partNumber int, md5Sum, sha256Sum []byte, size int64, metadata map[string][]string) (ObjectPart, error) {
 	// Input validation.
 	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
 		return ObjectPart{}, err
@@ -361,10 +256,21 @@ func (c Client) uploadPart(bucketName, objectName, uploadID string, reader io.Re
 	// Set upload id.
 	urlValues.Set("uploadId", uploadID)

+	// Set encryption headers, if any.
+	customHeader := make(http.Header)
+	for k, v := range metadata {
+		if len(v) > 0 {
+			if strings.HasPrefix(strings.ToLower(k), serverEncryptionKeyPrefix) {
+				customHeader.Set(k, v[0])
+			}
+		}
+	}
+
 	reqMetadata := requestMetadata{
 		bucketName:      bucketName,
 		objectName:      objectName,
 		queryValues:     urlValues,
+		customHeader:    customHeader,
 		contentBody:     reader,
 		contentLength:   size,
 		contentMD5Bytes: md5Sum,
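uploadPart now forwards only metadata keys prefixed with x-amz-server-side-encryption, which is exactly the header family produced by getSSEHeaders in api-compose-object.go above. A standalone sketch of what those SSE-C headers look like for a given key (placeholder key; no library calls involved):

package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

func main() {
	// The algorithm defaults to AES256; the 32-byte key is a placeholder.
	key := []byte("32byteslongsecretkeymustbegiven1")
	sum := md5.Sum(key)
	headers := map[string]string{
		"x-amz-server-side-encryption-customer-algorithm": "AES256",
		"x-amz-server-side-encryption-customer-key":       base64.StdEncoding.EncodeToString(key),
		"x-amz-server-side-encryption-customer-key-MD5":   base64.StdEncoding.EncodeToString(sum[:]),
	}
	// All three keys share the filtered prefix, so uploadPart passes them on.
	for k, v := range headers {
		fmt.Println(k, "=", v)
	}
}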
@@ -393,7 +299,8 @@ func (c Client) uploadPart(bucketName, objectName, uploadID string, reader io.Re
 }

 // completeMultipartUpload - Completes a multipart upload by assembling previously uploaded parts.
-func (c Client) completeMultipartUpload(bucketName, objectName, uploadID string, complete completeMultipartUpload) (completeMultipartUploadResult, error) {
+func (c Client) completeMultipartUpload(bucketName, objectName, uploadID string,
+	complete completeMultipartUpload) (completeMultipartUploadResult, error) {
 	// Input validation.
 	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
 		return completeMultipartUploadResult{}, err
@@ -1,191 +0,0 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package minio

import (
	"io"
	"strings"

	"github.com/minio/minio-go/pkg/credentials"
	"github.com/minio/minio-go/pkg/encrypt"
	"github.com/minio/minio-go/pkg/s3utils"
)

// PutObjectWithProgress - with progress.
func (c Client) PutObjectWithProgress(bucketName, objectName string, reader io.Reader, contentType string, progress io.Reader) (n int64, err error) {
	metaData := make(map[string][]string)
	metaData["Content-Type"] = []string{contentType}
	return c.PutObjectWithMetadata(bucketName, objectName, reader, metaData, progress)
}

// PutEncryptedObject - Encrypt and store object.
func (c Client) PutEncryptedObject(bucketName, objectName string, reader io.Reader, encryptMaterials encrypt.Materials, metaData map[string][]string, progress io.Reader) (n int64, err error) {

	if encryptMaterials == nil {
		return 0, ErrInvalidArgument("Unable to recognize empty encryption properties")
	}

	if err := encryptMaterials.SetupEncryptMode(reader); err != nil {
		return 0, err
	}

	if metaData == nil {
		metaData = make(map[string][]string)
	}

	// Set the necessary encryption headers, for future decryption.
	metaData[amzHeaderIV] = []string{encryptMaterials.GetIV()}
	metaData[amzHeaderKey] = []string{encryptMaterials.GetKey()}
	metaData[amzHeaderMatDesc] = []string{encryptMaterials.GetDesc()}

	return c.PutObjectWithMetadata(bucketName, objectName, encryptMaterials, metaData, progress)
}

// PutObjectWithMetadata - with metadata.
func (c Client) PutObjectWithMetadata(bucketName, objectName string, reader io.Reader, metaData map[string][]string, progress io.Reader) (n int64, err error) {
	// Input validation.
	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
		return 0, err
	}
	if err := s3utils.CheckValidObjectName(objectName); err != nil {
		return 0, err
	}
	if reader == nil {
		return 0, ErrInvalidArgument("Input reader is invalid, cannot be nil.")
	}

	// Size of the object.
	var size int64

	// Get reader size.
	size, err = getReaderSize(reader)
	if err != nil {
		return 0, err
	}

	// Check for largest object size allowed.
	if size > int64(maxMultipartPutObjectSize) {
		return 0, ErrEntityTooLarge(size, maxMultipartPutObjectSize, bucketName, objectName)
	}

	// NOTE: Google Cloud Storage does not implement Amazon S3 Compatible multipart PUT.
	if s3utils.IsGoogleEndpoint(c.endpointURL) {
		// Do not compute MD5 for Google Cloud Storage.
		return c.putObjectNoChecksum(bucketName, objectName, reader, size, metaData, progress)
	}

	// putSmall object.
	if size < minPartSize && size >= 0 {
		return c.putObjectSingle(bucketName, objectName, reader, size, metaData, progress)
	}

	// For all sizes greater than 5MiB do multipart.
	n, err = c.putObjectMultipart(bucketName, objectName, reader, size, metaData, progress)
	if err != nil {
		errResp := ToErrorResponse(err)
		// Verify if multipart functionality is not available, if not
		// fall back to single PutObject operation.
		if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
			// Verify if size of reader is greater than '5GiB'.
			if size > maxSinglePutObjectSize {
				return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
			}
			// Fall back to uploading as single PutObject operation.
			return c.putObjectSingle(bucketName, objectName, reader, size, metaData, progress)
		}
		return n, err
	}
	return n, nil
}

// PutObjectStreaming using AWS streaming signature V4
func (c Client) PutObjectStreaming(bucketName, objectName string, reader io.Reader) (n int64, err error) {
	return c.PutObjectStreamingWithProgress(bucketName, objectName, reader, nil, nil)
}

// PutObjectStreamingWithMetadata using AWS streaming signature V4
func (c Client) PutObjectStreamingWithMetadata(bucketName, objectName string, reader io.Reader, metadata map[string][]string) (n int64, err error) {
	return c.PutObjectStreamingWithProgress(bucketName, objectName, reader, metadata, nil)
}

// PutObjectStreamingWithProgress using AWS streaming signature V4
func (c Client) PutObjectStreamingWithProgress(bucketName, objectName string, reader io.Reader, metadata map[string][]string, progress io.Reader) (n int64, err error) {
	// NOTE: Streaming signature is not supported by GCS.
	if s3utils.IsGoogleEndpoint(c.endpointURL) {
		return 0, ErrorResponse{
			Code:       "NotImplemented",
			Message:    "AWS streaming signature v4 is not supported with Google Cloud Storage",
			Key:        objectName,
			BucketName: bucketName,
		}
	}

	if c.overrideSignerType.IsV2() {
		return 0, ErrorResponse{
			Code:       "NotImplemented",
			Message:    "AWS streaming signature v4 is not supported with minio client initialized for AWS signature v2",
			Key:        objectName,
			BucketName: bucketName,
		}
	}

	// Size of the object.
	var size int64

	// Get reader size.
	size, err = getReaderSize(reader)
	if err != nil {
		return 0, err
	}

	// Check for largest object size allowed.
	if size > int64(maxMultipartPutObjectSize) {
		return 0, ErrEntityTooLarge(size, maxMultipartPutObjectSize, bucketName, objectName)
	}

	// If size cannot be found on a stream, it is not possible
	// to upload using streaming signature, fall back to multipart.
	if size < 0 {
		return c.putObjectMultipartStream(bucketName, objectName, reader, size, metadata, progress)
	}

	// Set streaming signature.
	c.overrideSignerType = credentials.SignatureV4Streaming

	if size < minPartSize && size >= 0 {
		return c.putObjectNoChecksum(bucketName, objectName, reader, size, metadata, progress)
	}

	// For all sizes greater than 64MiB do multipart.
	n, err = c.putObjectMultipartStreamNoChecksum(bucketName, objectName, reader, size, metadata, progress)
	if err != nil {
		errResp := ToErrorResponse(err)
		// Verify if multipart functionality is not available, if not
		// fall back to single PutObject operation.
		if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
			// Verify if size of reader is greater than '5GiB'.
			if size > maxSinglePutObjectSize {
				return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
			}
			// Fall back to uploading as single PutObject operation.
			return c.putObjectNoChecksum(bucketName, objectName, reader, size, metadata, progress)
		}
		return n, err
	}

	return n, nil
}
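The deleted file above is where the `progress io.Reader` convention lived: callers hand in a reader, and the library drains exactly as many bytes from it as were uploaded, so any side-effecting reader doubles as a progress bar. A self-contained toy illustration of that convention (the `countingProgress` type is invented for this sketch):

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
)

// countingProgress is a toy progress "bar": every byte drained from it
// counts as one byte uploaded. minio-go drives such readers with
// io.CopyN(ioutil.Discard, progress, size) after each successful step.
type countingProgress struct{ total int64 }

func (p *countingProgress) Read(b []byte) (int, error) {
	p.total += int64(len(b))
	return len(b), nil
}

func main() {
	p := &countingProgress{}
	// Pretend a 1 MiB part just finished uploading.
	if _, err := io.CopyN(ioutil.Discard, p, 1<<20); err != nil {
		fmt.Println("progress error:", err)
		return
	}
	fmt.Println("bytes reported:", p.total) // 1048576
}
```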
@@ -1,219 +0,0 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015, 2016 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package minio

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"sort"

	"github.com/minio/minio-go/pkg/s3utils"
)

// uploadedPartRes - the response received from a part upload.
type uploadedPartRes struct {
	Error   error // Any error encountered while uploading the part.
	PartNum int   // Number of the part uploaded.
	Size    int64 // Size of the part uploaded.
	Part    *ObjectPart
}

type uploadPartReq struct {
	PartNum int         // Number of the part uploaded.
	Part    *ObjectPart // Size of the part uploaded.
}

// putObjectMultipartFromReadAt - Uploads files bigger than 5MiB. Supports reader
// of type which implements io.ReaderAt interface (ReadAt method).
//
// NOTE: This function is meant to be used for all readers which
// implement io.ReaderAt which allows us for resuming multipart
// uploads but reading at an offset, which would avoid re-read the
// data which was already uploaded. Internally this function uses
// temporary files for staging all the data, these temporary files are
// cleaned automatically when the caller i.e http client closes the
// stream after uploading all the contents successfully.
func (c Client) putObjectMultipartFromReadAt(bucketName, objectName string, reader io.ReaderAt, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) {
	// Input validation.
	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
		return 0, err
	}
	if err := s3utils.CheckValidObjectName(objectName); err != nil {
		return 0, err
	}

	// Initiate a new multipart upload.
	uploadID, err := c.newUploadID(bucketName, objectName, metaData)
	if err != nil {
		return 0, err
	}

	// Total data read and written to server. should be equal to 'size' at the end of the call.
	var totalUploadedSize int64

	// Complete multipart upload.
	var complMultipartUpload completeMultipartUpload

	// Calculate the optimal parts info for a given size.
	totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(size)
	if err != nil {
		return 0, err
	}

	// Declare a channel that sends the next part number to be uploaded.
	// Buffered to 10000 because thats the maximum number of parts allowed
	// by S3.
	uploadPartsCh := make(chan uploadPartReq, 10000)

	// Declare a channel that sends back the response of a part upload.
	// Buffered to 10000 because thats the maximum number of parts allowed
	// by S3.
	uploadedPartsCh := make(chan uploadedPartRes, 10000)

	// Used for readability, lastPartNumber is always totalPartsCount.
	lastPartNumber := totalPartsCount

	// Initialize parts uploaded map.
	partsInfo := make(map[int]ObjectPart)

	// Send each part number to the channel to be processed.
	for p := 1; p <= totalPartsCount; p++ {
		part, ok := partsInfo[p]
		if ok {
			uploadPartsCh <- uploadPartReq{PartNum: p, Part: &part}
		} else {
			uploadPartsCh <- uploadPartReq{PartNum: p, Part: nil}
		}
	}
	close(uploadPartsCh)

	// Receive each part number from the channel allowing three parallel uploads.
	for w := 1; w <= totalWorkers; w++ {
		go func() {
			// Read defaults to reading at 5MiB buffer.
			readAtBuffer := make([]byte, optimalReadBufferSize)

			// Each worker will draw from the part channel and upload in parallel.
			for uploadReq := range uploadPartsCh {
				// Declare a new tmpBuffer.
				tmpBuffer := new(bytes.Buffer)

				// If partNumber was not uploaded we calculate the missing
				// part offset and size. For all other part numbers we
				// calculate offset based on multiples of partSize.
				readOffset := int64(uploadReq.PartNum-1) * partSize
				missingPartSize := partSize

				// As a special case if partNumber is lastPartNumber, we
				// calculate the offset based on the last part size.
				if uploadReq.PartNum == lastPartNumber {
					readOffset = (size - lastPartSize)
					missingPartSize = lastPartSize
				}

				// Get a section reader on a particular offset.
				sectionReader := io.NewSectionReader(reader, readOffset, missingPartSize)

				// Choose the needed hash algorithms to be calculated by hashCopyBuffer.
				// Sha256 is avoided in non-v4 signature requests or HTTPS connections
				hashAlgos, hashSums := c.hashMaterials()

				var prtSize int64
				var err error
				prtSize, err = hashCopyBuffer(hashAlgos, hashSums, tmpBuffer, sectionReader, readAtBuffer)
				if err != nil {
					// Send the error back through the channel.
					uploadedPartsCh <- uploadedPartRes{
						Size:  0,
						Error: err,
					}
					// Exit the goroutine.
					return
				}

				// Proceed to upload the part.
				var objPart ObjectPart
				objPart, err = c.uploadPart(bucketName, objectName, uploadID, tmpBuffer,
					uploadReq.PartNum, hashSums["md5"], hashSums["sha256"], prtSize)
				if err != nil {
					uploadedPartsCh <- uploadedPartRes{
						Size:  0,
						Error: err,
					}
					// Exit the goroutine.
					return
				}

				// Save successfully uploaded part metadata.
				uploadReq.Part = &objPart

				// Send successful part info through the channel.
				uploadedPartsCh <- uploadedPartRes{
					Size:    missingPartSize,
					PartNum: uploadReq.PartNum,
					Part:    uploadReq.Part,
					Error:   nil,
				}
			}
		}()
	}

	// Gather the responses as they occur and update any
	// progress bar.
	for u := 1; u <= totalPartsCount; u++ {
		uploadRes := <-uploadedPartsCh
		if uploadRes.Error != nil {
			return totalUploadedSize, uploadRes.Error
		}
		// Retrieve each uploaded part and store it to be completed.
		// part, ok := partsInfo[uploadRes.PartNum]
		part := uploadRes.Part
		if part == nil {
			return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", uploadRes.PartNum))
		}
		// Update the totalUploadedSize.
		totalUploadedSize += uploadRes.Size
		// Update the progress bar if there is one.
		if progress != nil {
			if _, err = io.CopyN(ioutil.Discard, progress, uploadRes.Size); err != nil {
				return totalUploadedSize, err
			}
		}
		// Store the parts to be completed in order.
		complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
			ETag:       part.ETag,
			PartNumber: part.PartNumber,
		})
	}

	// Verify if we uploaded all the data.
	if totalUploadedSize != size {
		return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
	}

	// Sort all completed parts.
	sort.Sort(completedParts(complMultipartUpload.Parts))
	_, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload)
	if err != nil {
		return totalUploadedSize, err
	}

	// Return final size.
	return totalUploadedSize, nil
}
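The deleted function above is a classic bounded worker pool: part numbers go into one buffered channel, a fixed number of goroutines drain it, and results come back on a second channel. A stripped-down, runnable sketch of the same shape (worker count and job type are illustrative; the diff uses a package-level `totalWorkers`):

```go
package main

import "fmt"

type result struct {
	part int
	err  error
}

func main() {
	const totalParts = 7
	const workers = 3

	jobs := make(chan int, totalParts)
	results := make(chan result, totalParts)

	// Queue every part number up front, then close the channel so
	// workers terminate once the queue drains.
	for p := 1; p <= totalParts; p++ {
		jobs <- p
	}
	close(jobs)

	// Fixed pool of workers, each pulling part numbers until closed.
	for w := 0; w < workers; w++ {
		go func() {
			for p := range jobs {
				// A real implementation would upload part p here.
				results <- result{part: p}
			}
		}()
	}

	// Collect exactly one result per part, in completion order.
	for i := 0; i < totalParts; i++ {
		r := <-results
		if r.err != nil {
			fmt.Println("part failed:", r.part, r.err)
			return
		}
		fmt.Println("part done:", r.part)
	}
}
```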
436 vendor/src/github.com/minio/minio-go/api-put-object-streaming.go vendored Normal file
@@ -0,0 +1,436 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2017 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package minio

import (
	"fmt"
	"io"
	"net/http"
	"sort"
	"strings"

	"github.com/minio/minio-go/pkg/s3utils"
)

// PutObjectStreaming using AWS streaming signature V4
func (c Client) PutObjectStreaming(bucketName, objectName string, reader io.Reader) (n int64, err error) {
	return c.PutObjectWithProgress(bucketName, objectName, reader, nil, nil)
}

// putObjectMultipartStream - upload a large object using
// multipart upload and streaming signature for signing payload.
// Comprehensive put object operation involving multipart uploads.
//
// Following code handles these types of readers.
//
//  - *os.File
//  - *minio.Object
//  - Any reader which has a method 'ReadAt()'
//
func (c Client) putObjectMultipartStream(bucketName, objectName string,
	reader io.Reader, size int64, metadata map[string][]string, progress io.Reader) (n int64, err error) {

	// Verify if reader is *minio.Object, *os.File or io.ReaderAt.
	// NOTE: Verification of object is kept for a specific purpose
	// while it is going to be duck typed similar to io.ReaderAt.
	// It is to indicate that *minio.Object implements io.ReaderAt.
	// and such a functionality is used in the subsequent code path.
	if isFile(reader) || !isObject(reader) && isReadAt(reader) {
		n, err = c.putObjectMultipartStreamFromReadAt(bucketName, objectName, reader.(io.ReaderAt), size, metadata, progress)
	} else {
		n, err = c.putObjectMultipartStreamNoChecksum(bucketName, objectName, reader, size, metadata, progress)
	}
	if err != nil {
		errResp := ToErrorResponse(err)
		// Verify if multipart functionality is not available, if not
		// fall back to single PutObject operation.
		if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
			// Verify if size of reader is greater than '5GiB'.
			if size > maxSinglePutObjectSize {
				return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
			}
			// Fall back to uploading as single PutObject operation.
			return c.putObjectNoChecksum(bucketName, objectName, reader, size, metadata, progress)
		}
	}
	return n, err
}

// uploadedPartRes - the response received from a part upload.
type uploadedPartRes struct {
	Error   error // Any error encountered while uploading the part.
	PartNum int   // Number of the part uploaded.
	Size    int64 // Size of the part uploaded.
	Part    *ObjectPart
}

type uploadPartReq struct {
	PartNum int         // Number of the part uploaded.
	Part    *ObjectPart // Size of the part uploaded.
}

// putObjectMultipartFromReadAt - Uploads files bigger than 64MiB.
// Supports all readers which implements io.ReaderAt interface
// (ReadAt method).
//
// NOTE: This function is meant to be used for all readers which
// implement io.ReaderAt which allows us for resuming multipart
// uploads but reading at an offset, which would avoid re-read the
// data which was already uploaded. Internally this function uses
// temporary files for staging all the data, these temporary files are
// cleaned automatically when the caller i.e http client closes the
// stream after uploading all the contents successfully.
func (c Client) putObjectMultipartStreamFromReadAt(bucketName, objectName string,
	reader io.ReaderAt, size int64, metadata map[string][]string, progress io.Reader) (n int64, err error) {
	// Input validation.
	if err = s3utils.CheckValidBucketName(bucketName); err != nil {
		return 0, err
	}
	if err = s3utils.CheckValidObjectName(objectName); err != nil {
		return 0, err
	}

	// Calculate the optimal parts info for a given size.
	totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(size)
	if err != nil {
		return 0, err
	}

	// Initiate a new multipart upload.
	uploadID, err := c.newUploadID(bucketName, objectName, metadata)
	if err != nil {
		return 0, err
	}

	// Aborts the multipart upload in progress, if the
	// function returns any error, since we do not resume
	// we should purge the parts which have been uploaded
	// to relinquish storage space.
	defer func() {
		if err != nil {
			c.abortMultipartUpload(bucketName, objectName, uploadID)
		}
	}()

	// Total data read and written to server. should be equal to 'size' at the end of the call.
	var totalUploadedSize int64

	// Complete multipart upload.
	var complMultipartUpload completeMultipartUpload

	// Declare a channel that sends the next part number to be uploaded.
	// Buffered to 10000 because thats the maximum number of parts allowed
	// by S3.
	uploadPartsCh := make(chan uploadPartReq, 10000)

	// Declare a channel that sends back the response of a part upload.
	// Buffered to 10000 because thats the maximum number of parts allowed
	// by S3.
	uploadedPartsCh := make(chan uploadedPartRes, 10000)

	// Used for readability, lastPartNumber is always totalPartsCount.
	lastPartNumber := totalPartsCount

	// Send each part number to the channel to be processed.
	for p := 1; p <= totalPartsCount; p++ {
		uploadPartsCh <- uploadPartReq{PartNum: p, Part: nil}
	}
	close(uploadPartsCh)

	// Receive each part number from the channel allowing three parallel uploads.
	for w := 1; w <= totalWorkers; w++ {
		go func() {
			// Each worker will draw from the part channel and upload in parallel.
			for uploadReq := range uploadPartsCh {

				// If partNumber was not uploaded we calculate the missing
				// part offset and size. For all other part numbers we
				// calculate offset based on multiples of partSize.
				readOffset := int64(uploadReq.PartNum-1) * partSize

				// As a special case if partNumber is lastPartNumber, we
				// calculate the offset based on the last part size.
				if uploadReq.PartNum == lastPartNumber {
					readOffset = (size - lastPartSize)
					partSize = lastPartSize
				}

				// Get a section reader on a particular offset.
				sectionReader := newHook(io.NewSectionReader(reader, readOffset, partSize), progress)

				// Proceed to upload the part.
				var objPart ObjectPart
				objPart, err = c.uploadPart(bucketName, objectName, uploadID,
					sectionReader, uploadReq.PartNum,
					nil, nil, partSize, metadata)
				if err != nil {
					uploadedPartsCh <- uploadedPartRes{
						Size:  0,
						Error: err,
					}
					// Exit the goroutine.
					return
				}

				// Save successfully uploaded part metadata.
				uploadReq.Part = &objPart

				// Send successful part info through the channel.
				uploadedPartsCh <- uploadedPartRes{
					Size:    objPart.Size,
					PartNum: uploadReq.PartNum,
					Part:    uploadReq.Part,
					Error:   nil,
				}
			}
		}()
	}

	// Gather the responses as they occur and update any
	// progress bar.
	for u := 1; u <= totalPartsCount; u++ {
		uploadRes := <-uploadedPartsCh
		if uploadRes.Error != nil {
			return totalUploadedSize, uploadRes.Error
		}
		// Retrieve each uploaded part and store it to be completed.
		// part, ok := partsInfo[uploadRes.PartNum]
		part := uploadRes.Part
		if part == nil {
			return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", uploadRes.PartNum))
		}
		// Update the totalUploadedSize.
		totalUploadedSize += uploadRes.Size
		// Store the parts to be completed in order.
		complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
			ETag:       part.ETag,
			PartNumber: part.PartNumber,
		})
	}

	// Verify if we uploaded all the data.
	if totalUploadedSize != size {
		return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
	}

	// Sort all completed parts.
	sort.Sort(completedParts(complMultipartUpload.Parts))
	_, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload)
	if err != nil {
		return totalUploadedSize, err
	}

	// Return final size.
	return totalUploadedSize, nil
}

func (c Client) putObjectMultipartStreamNoChecksum(bucketName, objectName string,
	reader io.Reader, size int64, metadata map[string][]string, progress io.Reader) (n int64, err error) {
	// Input validation.
	if err = s3utils.CheckValidBucketName(bucketName); err != nil {
		return 0, err
	}
	if err = s3utils.CheckValidObjectName(objectName); err != nil {
		return 0, err
	}

	// Calculate the optimal parts info for a given size.
	totalPartsCount, partSize, lastPartSize, err := optimalPartInfo(size)
	if err != nil {
		return 0, err
	}

	// Initiates a new multipart request
	uploadID, err := c.newUploadID(bucketName, objectName, metadata)
	if err != nil {
		return 0, err
	}

	// Aborts the multipart upload if the function returns
	// any error, since we do not resume we should purge
	// the parts which have been uploaded to relinquish
	// storage space.
	defer func() {
		if err != nil {
			c.abortMultipartUpload(bucketName, objectName, uploadID)
		}
	}()

	// Total data read and written to server. should be equal to 'size' at the end of the call.
	var totalUploadedSize int64

	// Initialize parts uploaded map.
	partsInfo := make(map[int]ObjectPart)

	// Part number always starts with '1'.
	var partNumber int
	for partNumber = 1; partNumber <= totalPartsCount; partNumber++ {
		// Update progress reader appropriately to the latest offset
		// as we read from the source.
		hookReader := newHook(reader, progress)

		// Proceed to upload the part.
		if partNumber == totalPartsCount {
			partSize = lastPartSize
		}

		var objPart ObjectPart
		objPart, err = c.uploadPart(bucketName, objectName, uploadID,
			io.LimitReader(hookReader, partSize),
			partNumber, nil, nil, partSize, metadata)
		if err != nil {
			return totalUploadedSize, err
		}

		// Save successfully uploaded part metadata.
		partsInfo[partNumber] = objPart

		// Save successfully uploaded size.
		totalUploadedSize += partSize
	}

	// Verify if we uploaded all the data.
	if size > 0 {
		if totalUploadedSize != size {
			return totalUploadedSize, ErrUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
		}
	}

	// Complete multipart upload.
	var complMultipartUpload completeMultipartUpload

	// Loop over total uploaded parts to save them in
	// Parts array before completing the multipart request.
	for i := 1; i < partNumber; i++ {
		part, ok := partsInfo[i]
		if !ok {
			return 0, ErrInvalidArgument(fmt.Sprintf("Missing part number %d", i))
		}
		complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
			ETag:       part.ETag,
			PartNumber: part.PartNumber,
		})
	}

	// Sort all completed parts.
	sort.Sort(completedParts(complMultipartUpload.Parts))
	_, err = c.completeMultipartUpload(bucketName, objectName, uploadID, complMultipartUpload)
	if err != nil {
		return totalUploadedSize, err
	}

	// Return final size.
	return totalUploadedSize, nil
}

// putObjectNoChecksum special function used Google Cloud Storage. This special function
// is used for Google Cloud Storage since Google's multipart API is not S3 compatible.
func (c Client) putObjectNoChecksum(bucketName, objectName string, reader io.Reader, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) {
	// Input validation.
	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
		return 0, err
	}
	if err := s3utils.CheckValidObjectName(objectName); err != nil {
		return 0, err
	}

	// Size -1 is only supported on Google Cloud Storage, we error
	// out in all other situations.
	if size < 0 && !s3utils.IsGoogleEndpoint(c.endpointURL) {
		return 0, ErrEntityTooSmall(size, bucketName, objectName)
	}
	if size > 0 {
		if isReadAt(reader) && !isObject(reader) {
			reader = io.NewSectionReader(reader.(io.ReaderAt), 0, size)
		}
	}

	// Update progress reader appropriately to the latest offset as we
	// read from the source.
	readSeeker := newHook(reader, progress)

	// This function does not calculate sha256 and md5sum for payload.
	// Execute put object.
	st, err := c.putObjectDo(bucketName, objectName, readSeeker, nil, nil, size, metaData)
	if err != nil {
		return 0, err
	}
	if st.Size != size {
		return 0, ErrUnexpectedEOF(st.Size, size, bucketName, objectName)
	}
	return size, nil
}

// putObjectDo - executes the put object http operation.
// NOTE: You must have WRITE permissions on a bucket to add an object to it.
func (c Client) putObjectDo(bucketName, objectName string, reader io.Reader, md5Sum []byte, sha256Sum []byte, size int64, metaData map[string][]string) (ObjectInfo, error) {
	// Input validation.
	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
		return ObjectInfo{}, err
	}
	if err := s3utils.CheckValidObjectName(objectName); err != nil {
		return ObjectInfo{}, err
	}

	// Set headers.
	customHeader := make(http.Header)

	// Set metadata to headers
	for k, v := range metaData {
		if len(v) > 0 {
			customHeader.Set(k, v[0])
		}
	}

	// If Content-Type is not provided, set the default application/octet-stream one
	if v, ok := metaData["Content-Type"]; !ok || len(v) == 0 {
		customHeader.Set("Content-Type", "application/octet-stream")
	}

	// Populate request metadata.
	reqMetadata := requestMetadata{
		bucketName:         bucketName,
		objectName:         objectName,
		customHeader:       customHeader,
		contentBody:        reader,
		contentLength:      size,
		contentMD5Bytes:    md5Sum,
		contentSHA256Bytes: sha256Sum,
	}

	// Execute PUT an objectName.
	resp, err := c.executeMethod("PUT", reqMetadata)
	defer closeResponse(resp)
	if err != nil {
		return ObjectInfo{}, err
	}
	if resp != nil {
		if resp.StatusCode != http.StatusOK {
			return ObjectInfo{}, httpRespToErrorResponse(resp, bucketName, objectName)
		}
	}

	var objInfo ObjectInfo
	// Trim off the odd double quotes from ETag in the beginning and end.
	objInfo.ETag = strings.TrimPrefix(resp.Header.Get("ETag"), "\"")
	objInfo.ETag = strings.TrimSuffix(objInfo.ETag, "\"")
	// A success here means data was written to server successfully.
	objInfo.Size = size

	// Return here.
	return objInfo, nil
}
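The ReadAt-based path in the new file never seeks the shared source: each worker wraps the `io.ReaderAt` in an `io.NewSectionReader` at offset `(partNum-1)*partSize`, with the final part clamped to `lastPartSize`. A small runnable sketch of that offset arithmetic over an in-memory buffer (sizes are toy values, not the library's 64MiB parts):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
)

func main() {
	data := []byte("abcdefghijklmnopqrstuvwxyz") // 26 bytes of "object" data
	src := bytes.NewReader(data)                 // bytes.Reader implements io.ReaderAt

	const partSize = 10
	size := int64(len(data))
	totalParts := int((size + partSize - 1) / partSize) // 3 parts: 10+10+6
	lastPartSize := size - int64(totalParts-1)*partSize

	for p := 1; p <= totalParts; p++ {
		offset := int64(p-1) * partSize
		length := int64(partSize)
		if p == totalParts { // final part is clamped, as in the diff
			offset = size - lastPartSize
			length = lastPartSize
		}
		part, _ := ioutil.ReadAll(io.NewSectionReader(src, offset, length))
		fmt.Printf("part %d: %q\n", p, part)
	}
}
```

Because every worker reads through its own section reader, a failed part can be retried from the same offset without re-reading data that other workers already consumed.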
@@ -18,13 +18,12 @@ package minio
 
 import (
 	"io"
 	"io/ioutil"
 	"net/http"
 	"os"
 	"reflect"
 	"runtime"
 	"strings"
 
 	"github.com/minio/minio-go/pkg/credentials"
 	"github.com/minio/minio-go/pkg/s3utils"
 )

@@ -143,164 +142,77 @@ func (a completedParts) Less(i, j int) bool { return a[i].PartNumber < a[j].Part
 //
 // You must have WRITE permissions on a bucket to create an object.
 //
-// - For size smaller than 64MiB PutObject automatically does a single atomic Put operation.
-// - For size larger than 64MiB PutObject automatically does a multipart Put operation.
-// - For size input as -1 PutObject does a multipart Put operation until input stream reaches EOF.
-//   Maximum object size that can be uploaded through this operation will be 5TiB.
-//
-// NOTE: Google Cloud Storage does not implement Amazon S3 Compatible multipart PUT.
-// So we fall back to single PUT operation with the maximum limit of 5GiB.
+// - For size smaller than 64MiB PutObject automatically does a
+//   single atomic Put operation.
+// - For size larger than 64MiB PutObject automatically does a
+//   multipart Put operation.
+// - For size input as -1 PutObject does a multipart Put operation
+//   until input stream reaches EOF. Maximum object size that can
+//   be uploaded through this operation will be 5TiB.
 func (c Client) PutObject(bucketName, objectName string, reader io.Reader, contentType string) (n int64, err error) {
-	return c.PutObjectWithProgress(bucketName, objectName, reader, contentType, nil)
+	return c.PutObjectWithMetadata(bucketName, objectName, reader, map[string][]string{
+		"Content-Type": []string{contentType},
+	}, nil)
 }
 
-// putObjectNoChecksum special function used Google Cloud Storage. This special function
-// is used for Google Cloud Storage since Google's multipart API is not S3 compatible.
-func (c Client) putObjectNoChecksum(bucketName, objectName string, reader io.Reader, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) {
-	// Input validation.
-	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
-		return 0, err
-	}
-	if err := s3utils.CheckValidObjectName(objectName); err != nil {
-		return 0, err
-	}
-	if size > 0 {
-		readerAt, ok := reader.(io.ReaderAt)
-		if ok {
-			reader = io.NewSectionReader(readerAt, 0, size)
-		}
-	}
+// PutObjectWithSize - is a helper PutObject similar in behavior to PutObject()
+// but takes the size argument explicitly, this function avoids doing reflection
+// internally to figure out the size of input stream. Also if the input size is
+// lesser than 0 this function returns an error.
+func (c Client) PutObjectWithSize(bucketName, objectName string, reader io.Reader, readerSize int64, metadata map[string][]string, progress io.Reader) (n int64, err error) {
+	return c.putObjectCommon(bucketName, objectName, reader, readerSize, metadata, progress)
+}
 
-	// Update progress reader appropriately to the latest offset as we
-	// read from the source.
-	readSeeker := newHook(reader, progress)
+// PutObjectWithMetadata using AWS streaming signature V4
+func (c Client) PutObjectWithMetadata(bucketName, objectName string, reader io.Reader, metadata map[string][]string, progress io.Reader) (n int64, err error) {
+	return c.PutObjectWithProgress(bucketName, objectName, reader, metadata, progress)
+}
 
-	// This function does not calculate sha256 and md5sum for payload.
-	// Execute put object.
-	st, err := c.putObjectDo(bucketName, objectName, readSeeker, nil, nil, size, metaData)
+// PutObjectWithProgress using AWS streaming signature V4
+func (c Client) PutObjectWithProgress(bucketName, objectName string, reader io.Reader, metadata map[string][]string, progress io.Reader) (n int64, err error) {
+	// Size of the object.
+	var size int64
+
+	// Get reader size.
+	size, err = getReaderSize(reader)
 	if err != nil {
 		return 0, err
 	}
-	if st.Size != size {
-		return 0, ErrUnexpectedEOF(st.Size, size, bucketName, objectName)
-	}
-	return size, nil
+	return c.putObjectCommon(bucketName, objectName, reader, size, metadata, progress)
 }
 
-// putObjectSingle is a special function for uploading single put object request.
-// This special function is used as a fallback when multipart upload fails.
-func (c Client) putObjectSingle(bucketName, objectName string, reader io.Reader, size int64, metaData map[string][]string, progress io.Reader) (n int64, err error) {
-	// Input validation.
-	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
-		return 0, err
-	}
-	if err := s3utils.CheckValidObjectName(objectName); err != nil {
-		return 0, err
-	}
-	if size > maxSinglePutObjectSize {
-		return 0, ErrEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
-	}
-	// If size is a stream, upload up to 5GiB.
-	if size <= -1 {
-		size = maxSinglePutObjectSize
-	}
+func (c Client) putObjectCommon(bucketName, objectName string, reader io.Reader, size int64, metadata map[string][]string, progress io.Reader) (n int64, err error) {
+	// Check for largest object size allowed.
+	if size > int64(maxMultipartPutObjectSize) {
+		return 0, ErrEntityTooLarge(size, maxMultipartPutObjectSize, bucketName, objectName)
+	}
 
-	// Add the appropriate hash algorithms that need to be calculated by hashCopyN
-	// In case of non-v4 signature request or HTTPS connection, sha256 is not needed.
-	hashAlgos, hashSums := c.hashMaterials()
-
-	// Initialize a new temporary file.
-	tmpFile, err := newTempFile("single$-putobject-single")
-	if err != nil {
-		return 0, err
-	}
-	defer tmpFile.Close()
-
-	size, err = hashCopyN(hashAlgos, hashSums, tmpFile, reader, size)
-	// Return error if its not io.EOF.
-	if err != nil && err != io.EOF {
-		return 0, err
-	}
+	// NOTE: Streaming signature is not supported by GCS.
+	if s3utils.IsGoogleEndpoint(c.endpointURL) {
+		// Do not compute MD5 for Google Cloud Storage.
+		return c.putObjectNoChecksum(bucketName, objectName, reader, size, metadata, progress)
+	}
 
-	// Seek back to beginning of the temporary file.
-	if _, err = tmpFile.Seek(0, 0); err != nil {
-		return 0, err
-	}
-	reader = tmpFile
-
-	// Execute put object.
-	st, err := c.putObjectDo(bucketName, objectName, reader, hashSums["md5"], hashSums["sha256"], size, metaData)
-	if err != nil {
-		return 0, err
-	}
-	if st.Size != size {
-		return 0, ErrUnexpectedEOF(st.Size, size, bucketName, objectName)
-	}
-	// Progress the reader to the size if putObjectDo is successful.
-	if progress != nil {
-		if _, err = io.CopyN(ioutil.Discard, progress, size); err != nil {
-			return size, err
-		}
-	}
-	return size, nil
-}
+	if c.overrideSignerType.IsV2() {
+		if size > 0 && size < minPartSize {
+			return c.putObjectNoChecksum(bucketName, objectName, reader, size, metadata, progress)
+		}
+		return c.putObjectMultipart(bucketName, objectName, reader, size, metadata, progress)
+	}
 
-// putObjectDo - executes the put object http operation.
-// NOTE: You must have WRITE permissions on a bucket to add an object to it.
-func (c Client) putObjectDo(bucketName, objectName string, reader io.Reader, md5Sum []byte, sha256Sum []byte, size int64, metaData map[string][]string) (ObjectInfo, error) {
-	// Input validation.
-	if err := s3utils.CheckValidBucketName(bucketName); err != nil {
-		return ObjectInfo{}, err
-	}
-	if err := s3utils.CheckValidObjectName(objectName); err != nil {
-		return ObjectInfo{}, err
-	}
-
-	// Set headers.
-	customHeader := make(http.Header)
-
-	// Set metadata to headers
-	for k, v := range metaData {
-		if len(v) > 0 {
-			customHeader.Set(k, v[0])
-		}
-	}
-
-	// If Content-Type is not provided, set the default application/octet-stream one
-	if v, ok := metaData["Content-Type"]; !ok || len(v) == 0 {
-		customHeader.Set("Content-Type", "application/octet-stream")
-	}
-
-	// Populate request metadata.
-	reqMetadata := requestMetadata{
-		bucketName:         bucketName,
-		objectName:         objectName,
-		customHeader:       customHeader,
-		contentBody:        reader,
-		contentLength:      size,
-		contentMD5Bytes:    md5Sum,
-		contentSHA256Bytes: sha256Sum,
-	}
-
-	// Execute PUT an objectName.
-	resp, err := c.executeMethod("PUT", reqMetadata)
-	defer closeResponse(resp)
-	if err != nil {
-		return ObjectInfo{}, err
-	}
-	if resp != nil {
-		if resp.StatusCode != http.StatusOK {
-			return ObjectInfo{}, httpRespToErrorResponse(resp, bucketName, objectName)
-		}
-	}
-
-	var objInfo ObjectInfo
-	// Trim off the odd double quotes from ETag in the beginning and end.
-	objInfo.ETag = strings.TrimPrefix(resp.Header.Get("ETag"), "\"")
-	objInfo.ETag = strings.TrimSuffix(objInfo.ETag, "\"")
-	// A success here means data was written to server successfully.
-	objInfo.Size = size
-
-	// Return here.
-	return objInfo, nil
-}
+	// If size cannot be found on a stream, it is not possible
+	// to upload using streaming signature, fall back to multipart.
+	if size < 0 {
+		return c.putObjectMultipart(bucketName, objectName, reader, size, metadata, progress)
+	}
+
+	// Set streaming signature.
+	c.overrideSignerType = credentials.SignatureV4Streaming
+
+	if size < minPartSize {
+		return c.putObjectNoChecksum(bucketName, objectName, reader, size, metadata, progress)
+	}
+
+	// For all sizes greater than 64MiB do multipart.
+	return c.putObjectMultipartStream(bucketName, objectName, reader, size, metadata, progress)
+}
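The new `putObjectCommon` above is a pure dispatcher: reject anything over the multipart ceiling, special-case Google endpoints and v2 signatures, then choose streaming single put versus multipart at the part-size threshold. A standalone sketch of that decision table (the constants follow the sizes named in the comments above — 64MiB minimum part, 5TiB multipart ceiling; names are illustrative, not the library's):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	minPartSize               = 64 << 20 // 64 MiB
	maxMultipartPutObjectSize = 5 << 40  // 5 TiB
)

// chooseUploadPath mirrors the shape of putObjectCommon's dispatch.
func chooseUploadPath(size int64, googleEndpoint, v2Signer bool) (string, error) {
	if size > maxMultipartPutObjectSize {
		return "", errors.New("entity too large")
	}
	switch {
	case googleEndpoint:
		return "single put, no checksum", nil // GCS has no S3-compatible multipart
	case v2Signer && size > 0 && size < minPartSize:
		return "single put, no checksum", nil
	case v2Signer:
		return "multipart", nil
	case size < 0:
		return "multipart", nil // unknown size: cannot stream-sign
	case size < minPartSize:
		return "streaming single put", nil
	default:
		return "streaming multipart", nil
	}
}

func main() {
	for _, sz := range []int64{-1, 1 << 20, 100 << 20} {
		path, _ := chooseUploadPath(sz, false, false)
		fmt.Printf("size %d -> %s\n", sz, path)
	}
}
```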
2 vendor/src/github.com/minio/minio-go/api.go vendored
@@ -87,7 +87,7 @@ type Client struct {
 // Global constants.
 const (
 	libraryName    = "minio-go"
-	libraryVersion = "2.1.0"
+	libraryVersion = "3.0.0"
 )
 
 // User Agent should always following the below style.
@@ -21,10 +21,13 @@ import (
 	"errors"
 	"io"
 	"io/ioutil"
 	"log"
 	"math/rand"
 	"net/http"
 	"net/url"
 	"os"
 	"reflect"
 	"strings"
 	"testing"
 	"time"

@@ -757,18 +760,19 @@ func TestCopyObjectV2(t *testing.T) {
 			len(buf), n)
 	}
 
-	// Set copy conditions.
-	copyConds := CopyConditions{}
-	err = copyConds.SetModified(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
+	dst, err := NewDestinationInfo(bucketName+"-copy", objectName+"-copy", nil, nil)
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	src := NewSourceInfo(bucketName, objectName, nil)
+	err = src.SetModifiedSinceCond(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
 	if err != nil {
 		t.Fatal("Error:", err)
 	}
 
-	// Copy source.
-	copySource := bucketName + "/" + objectName
-
 	// Perform the Copy
-	err = c.CopyObject(bucketName+"-copy", objectName+"-copy", copySource, copyConds)
+	err = c.CopyObject(dst, src)
 	if err != nil {
 		t.Fatal("Error:", err, bucketName+"-copy", objectName+"-copy")
 	}
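The hunk above tracks minio-go's reworked copy API: conditions now hang off a `SourceInfo` and the target is a `DestinationInfo`, instead of a `CopyConditions` struct plus a "bucket/object" string. A hedged sketch of the new call shape, following the updated test (endpoint and credentials are placeholders):

```go
package main

import (
	"log"
	"time"

	minio "github.com/minio/minio-go"
)

func main() {
	// Placeholder endpoint/credentials; replace with a real server.
	c, err := minio.New("play.minio.io:9000", "ACCESS", "SECRET", true)
	if err != nil {
		log.Fatal(err)
	}

	// Destination first: bucket, object, optional SSE-C key, optional metadata.
	dst, err := minio.NewDestinationInfo("bucket-copy", "object-copy", nil, nil)
	if err != nil {
		log.Fatal(err)
	}

	// The source carries the copy conditions now.
	src := minio.NewSourceInfo("bucket", "object", nil)
	if err := src.SetModifiedSinceCond(time.Date(2017, time.January, 1, 0, 0, 0, 0, time.UTC)); err != nil {
		log.Fatal(err)
	}

	if err := c.CopyObject(dst, src); err != nil {
		log.Fatal(err)
	}
}
```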
@@ -1106,3 +1110,361 @@ func TestFunctionalV2(t *testing.T) {
|
||||
t.Fatal("Error: ", err)
|
||||
}
|
||||
}
|
||||
|
||||
func testComposeObjectErrorCases(c *Client, t *testing.T) {
|
||||
// Generate a new random bucket name.
|
||||
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
|
||||
|
||||
// Make a new bucket in 'us-east-1' (source bucket).
|
||||
err := c.MakeBucket(bucketName, "us-east-1")
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err, bucketName)
|
||||
}
|
||||
|
||||
// Test that more than 10K source objects cannot be
|
||||
// concatenated.
|
||||
srcArr := [10001]SourceInfo{}
|
||||
srcSlice := srcArr[:]
|
||||
dst, err := NewDestinationInfo(bucketName, "object", nil, nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if err := c.ComposeObject(dst, srcSlice); err == nil {
|
||||
t.Fatal("Error was expected.")
|
||||
} else if err.Error() != "There must be as least one and upto 10000 source objects." {
|
||||
t.Fatal("Got unexpected error: ", err)
|
||||
}
|
||||
|
||||
// Create a source with invalid offset spec and check that
|
||||
// error is returned:
|
||||
// 1. Create the source object.
|
||||
const badSrcSize = 5 * 1024 * 1024
|
||||
buf := bytes.Repeat([]byte("1"), badSrcSize)
|
||||
_, err = c.PutObject(bucketName, "badObject", bytes.NewReader(buf), "")
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
// 2. Set invalid range spec on the object (going beyond
|
||||
// object size)
|
||||
badSrc := NewSourceInfo(bucketName, "badObject", nil)
|
||||
err = badSrc.SetRange(1, badSrcSize)
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
// 3. ComposeObject call should fail.
|
||||
if err := c.ComposeObject(dst, []SourceInfo{badSrc}); err == nil {
|
||||
t.Fatal("Error was expected.")
|
||||
} else if !strings.Contains(err.Error(), "has invalid segment-to-copy") {
|
||||
t.Fatal("Got unexpected error: ", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Test expected error cases
|
||||
func TestComposeObjectErrorCasesV2(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("skipping functional tests for the short runs")
|
||||
}
|
||||
|
||||
// Instantiate new minio client object
|
||||
c, err := NewV2(
|
||||
os.Getenv(serverEndpoint),
|
||||
os.Getenv(accessKey),
|
||||
os.Getenv(secretKey),
|
||||
mustParseBool(os.Getenv(enableSecurity)),
|
||||
)
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
|
||||
testComposeObjectErrorCases(c, t)
|
||||
}
|
||||
|
||||
func testComposeMultipleSources(c *Client, t *testing.T) {
|
||||
// Generate a new random bucket name.
|
||||
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
|
||||
// Make a new bucket in 'us-east-1' (source bucket).
|
||||
err := c.MakeBucket(bucketName, "us-east-1")
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err, bucketName)
|
||||
}
|
||||
|
||||
// Upload a small source object
|
||||
const srcSize = 1024 * 1024 * 5
|
||||
buf := bytes.Repeat([]byte("1"), srcSize)
|
||||
_, err = c.PutObject(bucketName, "srcObject", bytes.NewReader(buf), "binary/octet-stream")
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
|
||||
// We will append 10 copies of the object.
|
||||
srcs := []SourceInfo{}
|
||||
for i := 0; i < 10; i++ {
|
||||
srcs = append(srcs, NewSourceInfo(bucketName, "srcObject", nil))
|
||||
}
|
||||
// make the last part very small
|
||||
err = srcs[9].SetRange(0, 0)
|
||||
if err != nil {
|
||||
t.Fatal("unexpected error:", err)
|
||||
}
|
||||
|
||||
dst, err := NewDestinationInfo(bucketName, "dstObject", nil, nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
err = c.ComposeObject(dst, srcs)
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
|
||||
objProps, err := c.StatObject(bucketName, "dstObject")
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
|
||||
if objProps.Size != 9*srcSize+1 {
|
||||
t.Fatal("Size mismatched! Expected:", 10000*srcSize, "but got:", objProps.Size)
|
||||
}
|
||||
}
|
||||
|
||||
// Test concatenating multiple objects objects
|
||||
func TestCompose10KSourcesV2(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("skipping functional tests for the short runs")
|
||||
}
|
||||
|
||||
// Instantiate new minio client object
|
||||
c, err := NewV2(
|
||||
os.Getenv(serverEndpoint),
|
||||
os.Getenv(accessKey),
|
||||
os.Getenv(secretKey),
|
||||
mustParseBool(os.Getenv(enableSecurity)),
|
||||
)
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
|
||||
testComposeMultipleSources(c, t)
|
||||
}
|
||||
|
||||
func testEncryptedCopyObject(c *Client, t *testing.T) {
|
||||
// Generate a new random bucket name.
|
||||
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
|
||||
// Make a new bucket in 'us-east-1' (source bucket).
|
||||
err := c.MakeBucket(bucketName, "us-east-1")
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err, bucketName)
|
||||
}
|
||||
|
||||
key1 := NewSSEInfo([]byte("32byteslongsecretkeymustbegiven1"), "AES256")
|
||||
key2 := NewSSEInfo([]byte("32byteslongsecretkeymustbegiven2"), "AES256")
|
||||
|
||||
// 1. create an sse-c encrypted object to copy by uploading
|
||||
const srcSize = 1024 * 1024
|
||||
buf := bytes.Repeat([]byte("abcde"), srcSize) // gives a buffer of 5MiB
|
||||
metadata := make(map[string][]string)
|
||||
for k, v := range key1.GetSSEHeaders() {
|
||||
metadata[k] = append(metadata[k], v)
|
||||
}
|
||||
_, err = c.PutObjectWithSize(bucketName, "srcObject", bytes.NewReader(buf), int64(len(buf)), metadata, nil)
|
||||
if err != nil {
|
||||
t.Fatal("PutObjectWithSize Error:", err)
|
||||
}
|
||||
|
||||
// 2. copy object and change encryption key
|
||||
src := NewSourceInfo(bucketName, "srcObject", &key1)
|
||||
dst, err := NewDestinationInfo(bucketName, "dstObject", &key2, nil)
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
|
||||
err = c.CopyObject(dst, src)
|
||||
if err != nil {
|
||||
t.Fatal("CopyObject Error:", err)
|
||||
}
|
||||
|
||||
// 3. get copied object and check if content is equal
|
||||
reqH := NewGetReqHeaders()
|
||||
for k, v := range key2.GetSSEHeaders() {
|
||||
reqH.Set(k, v)
|
||||
}
|
||||
coreClient := Core{c}
|
||||
reader, _, err := coreClient.GetObject(bucketName, "dstObject", reqH)
|
||||
if err != nil {
|
||||
t.Fatal("GetObject Error:", err)
|
||||
}
|
||||
defer reader.Close()
|
||||
|
||||
decBytes, err := ioutil.ReadAll(reader)
|
||||
if err != nil {
|
||||
log.Fatalln(err)
|
||||
}
|
||||
if !bytes.Equal(decBytes, buf) {
|
||||
log.Fatal("downloaded object mismatched for encrypted object")
|
||||
}
|
||||
}
|
||||
|
||||
// Test encrypted copy object
|
||||
func TestEncryptedCopyObjectV2(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("skipping functional tests for the short runs")
|
||||
}
|
||||
|
||||
// Instantiate new minio client object
|
||||
c, err := NewV2(
|
||||
os.Getenv(serverEndpoint),
|
||||
os.Getenv(accessKey),
|
||||
os.Getenv(secretKey),
|
||||
mustParseBool(os.Getenv(enableSecurity)),
|
||||
)
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err)
|
||||
}
|
||||
|
||||
testEncryptedCopyObject(c, t)
|
||||
}
|
||||
|
||||
func testUserMetadataCopying(c *Client, t *testing.T) {
|
||||
// Generate a new random bucket name.
|
||||
bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")
|
||||
// Make a new bucket in 'us-east-1' (source bucket).
|
||||
err := c.MakeBucket(bucketName, "us-east-1")
|
||||
if err != nil {
|
||||
t.Fatal("Error:", err, bucketName)
|
||||
}
|
||||
|
||||
fetchMeta := func(object string) (h http.Header) {
|
||||
objInfo, err := c.StatObject(bucketName, object)
|
||||
if err != nil {
|
||||
t.Fatal("Metadata fetch error:", err)
|
||||
}
|
||||
h = make(http.Header)
|
||||
for k, vs := range objInfo.Metadata {
|
||||
if strings.HasPrefix(strings.ToLower(k), "x-amz-meta-") {
|
||||
for _, v := range vs {
|
||||
h.Add(k, v)
|
||||
}
|
||||
}
|
||||
}
|
||||
return h
|
||||
}
|
||||
|
||||
// 1. create a client encrypted object to copy by uploading
|
||||
const srcSize = 1024 * 1024
|
||||
	buf := bytes.Repeat([]byte("abcde"), srcSize) // gives a buffer of 5MiB
	metadata := make(http.Header)
	metadata.Set("x-amz-meta-myheader", "myvalue")
	_, err = c.PutObjectWithMetadata(bucketName, "srcObject",
		bytes.NewReader(buf), metadata, nil)
	if err != nil {
		t.Fatal("Put Error:", err)
	}
	if !reflect.DeepEqual(metadata, fetchMeta("srcObject")) {
		t.Fatal("Unequal metadata")
	}

	// 2. create source
	src := NewSourceInfo(bucketName, "srcObject", nil)
	// 2.1 create destination with metadata set
	dst1, err := NewDestinationInfo(bucketName, "dstObject-1", nil, map[string]string{"notmyheader": "notmyvalue"})
	if err != nil {
		t.Fatal("Error:", err)
	}

	// 3. Check that copying to an object with metadata set resets
	// the headers on the copy.
	err = c.CopyObject(dst1, src)
	if err != nil {
		t.Fatal("Error:", err)
	}

	expectedHeaders := make(http.Header)
	expectedHeaders.Set("x-amz-meta-notmyheader", "notmyvalue")
	if !reflect.DeepEqual(expectedHeaders, fetchMeta("dstObject-1")) {
		t.Fatal("Unequal metadata")
	}

	// 4. create destination with no metadata set and same source
	dst2, err := NewDestinationInfo(bucketName, "dstObject-2", nil, nil)
	if err != nil {
		t.Fatal("Error:", err)
	}
	src = NewSourceInfo(bucketName, "srcObject", nil)

	// 5. Check that copying to an object with no metadata set,
	// copies metadata.
	err = c.CopyObject(dst2, src)
	if err != nil {
		t.Fatal("Error:", err)
	}

	expectedHeaders = metadata
	if !reflect.DeepEqual(expectedHeaders, fetchMeta("dstObject-2")) {
		t.Fatal("Unequal metadata")
	}

	// 6. Compose a pair of sources.
	srcs := []SourceInfo{
		NewSourceInfo(bucketName, "srcObject", nil),
		NewSourceInfo(bucketName, "srcObject", nil),
	}
	dst3, err := NewDestinationInfo(bucketName, "dstObject-3", nil, nil)
	if err != nil {
		t.Fatal("Error:", err)
	}

	err = c.ComposeObject(dst3, srcs)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// Check that no headers are copied in this case
	if !reflect.DeepEqual(make(http.Header), fetchMeta("dstObject-3")) {
		t.Fatal("Unequal metadata")
	}

	// 7. Compose a pair of sources with dest user metadata set.
	srcs = []SourceInfo{
		NewSourceInfo(bucketName, "srcObject", nil),
		NewSourceInfo(bucketName, "srcObject", nil),
	}
	dst4, err := NewDestinationInfo(bucketName, "dstObject-4", nil, map[string]string{"notmyheader": "notmyvalue"})
	if err != nil {
		t.Fatal("Error:", err)
	}

	err = c.ComposeObject(dst4, srcs)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// Check that the destination carries exactly the explicitly set
	// user metadata, not the source's.
	expectedHeaders = make(http.Header)
	expectedHeaders.Set("x-amz-meta-notmyheader", "notmyvalue")
	if !reflect.DeepEqual(expectedHeaders, fetchMeta("dstObject-4")) {
		t.Fatal("Unequal metadata")
	}
}

func TestUserMetadataCopyingV2(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping functional tests for the short runs")
	}

	// Instantiate new minio client object
	c, err := NewV2(
		os.Getenv(serverEndpoint),
		os.Getenv(accessKey),
		os.Getenv(secretKey),
		mustParseBool(os.Getenv(enableSecurity)),
	)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// c.TraceOn(os.Stderr)
	testUserMetadataCopying(c, t)
}
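
Taken together, the assertions above pin down the user-metadata rules for server-side copying: an explicit `userMeta` on the destination replaces whatever the source carries, a nil `userMeta` with a single source inherits the source's metadata, and compose with multiple sources copies nothing unless the destination sets it. A minimal caller-side sketch of the same rules (endpoint, credentials and names are placeholders):

```go
package main

import (
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	// Endpoint and credentials are placeholders.
	c, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
	if err != nil {
		log.Fatalln(err)
	}

	src := minio.NewSourceInfo("srcbucket", "srcobject", nil)

	// Explicit user metadata on the destination: the copy carries exactly
	// this metadata; nothing is inherited from the source.
	dstReplace, err := minio.NewDestinationInfo("dstbucket", "replaced", nil,
		map[string]string{"notmyheader": "notmyvalue"})
	if err != nil {
		log.Fatalln(err)
	}
	if err = c.CopyObject(dstReplace, src); err != nil {
		log.Fatalln(err)
	}

	// Nil user metadata with a single source: the source's user metadata
	// is copied to the destination.
	dstInherit, err := minio.NewDestinationInfo("dstbucket", "inherited", nil, nil)
	if err != nil {
		log.Fatalln(err)
	}
	if err = c.CopyObject(dstInherit, src); err != nil {
		log.Fatalln(err)
	}
}
```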
@@ -112,14 +112,14 @@ func TestMakeBucketError(t *testing.T) {
		t.Fatal("Error:", err, bucketName+"..-1")
	}
	// Verify valid error response.
	if ToErrorResponse(err).Code != "InvalidBucketName" {
		if err != nil && err.Error() != "Bucket name contains invalid characters" {
			t.Fatal("Error: Invalid error returned by server", err)
		}
	if err = c.MakeBucket(bucketName+"AAA-1", "eu-central-1"); err == nil {
		t.Fatal("Error:", err, bucketName+"..-1")
	}
	// Verify valid error response.
	if ToErrorResponse(err).Code != "InvalidBucketName" {
		if err != nil && err.Error() != "Bucket name contains invalid characters" {
			t.Fatal("Error: Invalid error returned by server", err)
		}
	}
@@ -316,7 +316,9 @@ func TestPutObjectWithMetadata(t *testing.T) {
	// Object custom metadata
	customContentType := "custom/contenttype"

	n, err := c.PutObjectWithMetadata(bucketName, objectName, bytes.NewReader(buf), map[string][]string{"Content-Type": {customContentType}}, nil)
	n, err := c.PutObjectWithMetadata(bucketName, objectName, bytes.NewReader(buf), map[string][]string{
		"Content-Type": {customContentType},
	}, nil)
	if err != nil {
		t.Fatal("Error:", err, bucketName, objectName)
	}
@@ -424,8 +426,8 @@ func TestPutObjectStreaming(t *testing.T) {
	}
}

// Test listing partially uploaded objects.
func TestListPartiallyUploaded(t *testing.T) {
// Test listing no partially uploaded objects upon putObject error.
func TestListNoPartiallyUploadedObjects(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping function tests for short runs")
	}
@@ -480,18 +482,25 @@ func TestListPartiallyUploaded(t *testing.T) {
	if err == nil {
		t.Fatal("Error: PutObject should fail.")
	}
	if err.Error() != "proactively closed to be verified later" {
	if !strings.Contains(err.Error(), "proactively closed to be verified later") {
		t.Fatal("Error:", err)
	}

	doneCh := make(chan struct{})
	defer close(doneCh)

	isRecursive := true
	multiPartObjectCh := c.ListIncompleteUploads(bucketName, objectName, isRecursive, doneCh)

	var activeUploads bool
	for multiPartObject := range multiPartObjectCh {
		if multiPartObject.Err != nil {
			t.Fatalf("Error: Error when listing incomplete upload")
		}
		activeUploads = true
	}
	if activeUploads {
		t.Errorf("There should be no active uploads in progress upon error for %s/%s", bucketName, objectName)
	}

	err = c.RemoveBucket(bucketName)
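
The rewritten test drains `ListIncompleteUploads` instead of matching the error string exactly. The listing API follows minio-go's channel convention: results stream over a channel and closing a done channel cancels the lister. A small sketch of that consumption pattern (bucket and prefix names are placeholders):

```go
package main

import (
	"log"

	minio "github.com/minio/minio-go"
)

// listIncomplete drains ListIncompleteUploads. Closing doneCh on return
// tells the listing goroutine to stop early. Names are placeholders.
func listIncomplete(c *minio.Client) {
	doneCh := make(chan struct{})
	defer close(doneCh)

	isRecursive := true
	for upload := range c.ListIncompleteUploads("my-bucket", "my-prefix", isRecursive, doneCh) {
		if upload.Err != nil {
			log.Println("list error:", upload.Err)
			return
		}
		log.Println("incomplete upload:", upload.Key)
	}
}

func main() {
	c, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
	if err != nil {
		log.Fatalln(err)
	}
	listIncomplete(c)
}
```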
@@ -750,75 +759,6 @@ func TestRemoveMultipleObjects(t *testing.T) {
	}
}

// Tests removing partially uploaded objects.
func TestRemovePartiallyUploaded(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping function tests for short runs")
	}

	// Seed random based on current time.
	rand.Seed(time.Now().Unix())

	// Instantiate new minio client object.
	c, err := New(
		os.Getenv(serverEndpoint),
		os.Getenv(accessKey),
		os.Getenv(secretKey),
		mustParseBool(os.Getenv(enableSecurity)),
	)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// Set user agent.
	c.SetAppInfo("Minio-go-FunctionalTest", "0.1.0")

	// Enable tracing, write to stdout.
	// c.TraceOn(os.Stderr)

	// Generate a new random bucket name.
	bucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test")

	// Make a new bucket.
	err = c.MakeBucket(bucketName, "us-east-1")
	if err != nil {
		t.Fatal("Error:", err, bucketName)
	}

	r := bytes.NewReader(bytes.Repeat([]byte("a"), 128*1024))

	reader, writer := io.Pipe()
	go func() {
		i := 0
		for i < 25 {
			_, cerr := io.CopyN(writer, r, 128*1024)
			if cerr != nil {
				t.Fatal("Error:", cerr, bucketName)
			}
			i++
			r.Seek(0, 0)
		}
		writer.CloseWithError(errors.New("proactively closed to be verified later"))
	}()

	objectName := bucketName + "-resumable"
	_, err = c.PutObject(bucketName, objectName, reader, "application/octet-stream")
	if err == nil {
		t.Fatal("Error: PutObject should fail.")
	}
	if err.Error() != "proactively closed to be verified later" {
		t.Fatal("Error:", err)
	}
	err = c.RemoveIncompleteUpload(bucketName, objectName)
	if err != nil {
		t.Fatal("Error:", err)
	}
	err = c.RemoveBucket(bucketName)
	if err != nil {
		t.Fatal("Error:", err)
	}
}
// Tests FPutObject of a big file to trigger multipart
func TestFPutObjectMultipart(t *testing.T) {
	if testing.Short() {
@@ -1540,41 +1480,45 @@ func TestCopyObject(t *testing.T) {
		t.Fatal("Error:", err)
	}

	// Copy Source
	src := NewSourceInfo(bucketName, objectName, nil)

	// Set copy conditions.
	copyConds := CopyConditions{}

	// Start by setting wrong conditions
	err = copyConds.SetModified(time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC))
	// All invalid conditions first.
	err = src.SetModifiedSinceCond(time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC))
	if err == nil {
		t.Fatal("Error:", err)
	}
	err = copyConds.SetUnmodified(time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC))
	err = src.SetUnmodifiedSinceCond(time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC))
	if err == nil {
		t.Fatal("Error:", err)
	}
	err = copyConds.SetMatchETag("")
	err = src.SetMatchETagCond("")
	if err == nil {
		t.Fatal("Error:", err)
	}
	err = copyConds.SetMatchETagExcept("")
	err = src.SetMatchETagExceptCond("")
	if err == nil {
		t.Fatal("Error:", err)
	}

	err = copyConds.SetModified(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
	err = src.SetModifiedSinceCond(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
	if err != nil {
		t.Fatal("Error:", err)
	}
	err = copyConds.SetMatchETag(objInfo.ETag)
	err = src.SetMatchETagCond(objInfo.ETag)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// Copy source.
	copySource := bucketName + "/" + objectName
	dst, err := NewDestinationInfo(bucketName+"-copy", objectName+"-copy", nil, nil)
	if err != nil {
		t.Fatal(err)
	}

	// Perform the Copy
	err = c.CopyObject(bucketName+"-copy", objectName+"-copy", copySource, copyConds)
	err = c.CopyObject(dst, src)
	if err != nil {
		t.Fatal("Error:", err, bucketName+"-copy", objectName+"-copy")
	}
@@ -1604,18 +1548,18 @@ func TestCopyObject(t *testing.T) {
	}

	// CopyObject again but with wrong conditions
	copyConds = CopyConditions{}
	err = copyConds.SetUnmodified(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
	src = NewSourceInfo(bucketName, objectName, nil)
	err = src.SetUnmodifiedSinceCond(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
	if err != nil {
		t.Fatal("Error:", err)
	}
	err = copyConds.SetMatchETagExcept(objInfo.ETag)
	err = src.SetMatchETagExceptCond(objInfo.ETag)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// Perform the Copy which should fail
	err = c.CopyObject(bucketName+"-copy", objectName+"-copy", copySource, copyConds)
	err = c.CopyObject(dst, src)
	if err == nil {
		t.Fatal("Error:", err, bucketName+"-copy", objectName+"-copy should fail")
	}
@@ -2383,3 +2327,84 @@ func TestPutObjectUploadSeekedObject(t *testing.T) {
		t.Fatal("Error:", err)
	}
}

// Test expected error cases
func TestComposeObjectErrorCases(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping functional tests for the short runs")
	}

	// Instantiate new minio client object
	c, err := NewV4(
		os.Getenv(serverEndpoint),
		os.Getenv(accessKey),
		os.Getenv(secretKey),
		mustParseBool(os.Getenv(enableSecurity)),
	)
	if err != nil {
		t.Fatal("Error:", err)
	}

	testComposeObjectErrorCases(c, t)
}

// Test concatenating 10K objects
func TestCompose10KSources(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping functional tests for the short runs")
	}

	// Instantiate new minio client object
	c, err := NewV4(
		os.Getenv(serverEndpoint),
		os.Getenv(accessKey),
		os.Getenv(secretKey),
		mustParseBool(os.Getenv(enableSecurity)),
	)
	if err != nil {
		t.Fatal("Error:", err)
	}

	testComposeMultipleSources(c, t)
}

// Test encrypted copy object
func TestEncryptedCopyObject(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping functional tests for the short runs")
	}

	// Instantiate new minio client object
	c, err := NewV4(
		os.Getenv(serverEndpoint),
		os.Getenv(accessKey),
		os.Getenv(secretKey),
		mustParseBool(os.Getenv(enableSecurity)),
	)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// c.TraceOn(os.Stderr)
	testEncryptedCopyObject(c, t)
}

func TestUserMetadataCopying(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping functional tests for the short runs")
	}

	// Instantiate new minio client object
	c, err := NewV4(
		os.Getenv(serverEndpoint),
		os.Getenv(accessKey),
		os.Getenv(secretKey),
		mustParseBool(os.Getenv(enableSecurity)),
	)
	if err != nil {
		t.Fatal("Error:", err)
	}

	// c.TraceOn(os.Stderr)
	testUserMetadataCopying(c, t)
}
@@ -182,27 +182,6 @@ func TestValidBucketLocation(t *testing.T) {
	}
}

// Tests temp file.
func TestTempFile(t *testing.T) {
	tmpFile, err := newTempFile("testing")
	if err != nil {
		t.Fatal("Error:", err)
	}
	fileName := tmpFile.Name()
	// Closing temporary file purges the file.
	err = tmpFile.Close()
	if err != nil {
		t.Fatal("Error:", err)
	}
	st, err := os.Stat(fileName)
	if err != nil && !os.IsNotExist(err) {
		t.Fatal("Error:", err)
	}
	if err == nil && st != nil {
		t.Fatal("Error: file should be deleted and should not exist.")
	}
}

// Tests error response structure.
func TestErrorResponse(t *testing.T) {
	var err error
@@ -18,10 +18,18 @@ package minio

/// Multipart upload defaults.

// miniPartSize - minimum part size 64MiB per object after which
// absMinPartSize - absolute minimum part size (5 MiB) below which
// a part in a multipart upload may not be uploaded.
const absMinPartSize = 1024 * 1024 * 5

// minPartSize - minimum part size 64MiB per object after which
// putObject behaves internally as multipart.
const minPartSize = 1024 * 1024 * 64

// copyPartSize - default (and maximum) part size to copy in a
// copy-object request (5GiB)
const copyPartSize = 1024 * 1024 * 1024 * 5

// maxPartsCount - maximum number of parts for a single multipart session.
const maxPartsCount = 10000

@@ -37,10 +45,6 @@ const maxSinglePutObjectSize = 1024 * 1024 * 1024 * 5
// Multipart operation.
const maxMultipartPutObjectSize = 1024 * 1024 * 1024 * 1024 * 5

// optimalReadBufferSize - optimal buffer 5MiB used for reading
// through Read operation.
const optimalReadBufferSize = 1024 * 1024 * 5

// unsignedPayload - value to be set to X-Amz-Content-Sha256 header when
// we don't want to sign the request payload
const unsignedPayload = "UNSIGNED-PAYLOAD"
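
These constants bound how uploads are split: putObject switches to multipart above `minPartSize`, a session may not exceed `maxPartsCount` parts, and together those caps determine `maxMultipartPutObjectSize`. A small sketch of the part-sizing arithmetic using the constant values above (the 512 GiB object size is hypothetical):

```go
package main

import "fmt"

// Values taken from the constants above.
const (
	minPartSize   = 1024 * 1024 * 64 // 64 MiB
	maxPartsCount = 10000            // max parts per multipart session
)

func main() {
	objectSize := int64(512) * 1024 * 1024 * 1024 // hypothetical 512 GiB object

	// Smallest part size that still fits the object into maxPartsCount
	// parts, floored at minPartSize.
	partSize := (objectSize + maxPartsCount - 1) / maxPartsCount
	if partSize < minPartSize {
		partSize = minPartSize
	}
	parts := (objectSize + partSize - 1) / partSize
	fmt.Printf("part size %d bytes, %d parts\n", partSize, parts)
}
```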
@@ -1,99 +0,0 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2016 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package minio

import (
	"net/http"
	"time"
)

// copyCondition explanation:
// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
//
// Example:
//
//	copyCondition {
//		key:   "x-amz-copy-if-modified-since",
//		value: "Tue, 15 Nov 1994 12:45:26 GMT",
//	}
//
type copyCondition struct {
	key   string
	value string
}

// CopyConditions - copy conditions.
type CopyConditions struct {
	conditions []copyCondition
}

// NewCopyConditions - Instantiate new list of conditions. This
// function is left behind for backward compatibility. The idiomatic
// way to set an empty set of copy conditions is,
// ``copyConditions := CopyConditions{}``.
//
func NewCopyConditions() CopyConditions {
	return CopyConditions{}
}

// SetMatchETag - set match etag.
func (c *CopyConditions) SetMatchETag(etag string) error {
	if etag == "" {
		return ErrInvalidArgument("ETag cannot be empty.")
	}
	c.conditions = append(c.conditions, copyCondition{
		key:   "x-amz-copy-source-if-match",
		value: etag,
	})
	return nil
}

// SetMatchETagExcept - set match etag except.
func (c *CopyConditions) SetMatchETagExcept(etag string) error {
	if etag == "" {
		return ErrInvalidArgument("ETag cannot be empty.")
	}
	c.conditions = append(c.conditions, copyCondition{
		key:   "x-amz-copy-source-if-none-match",
		value: etag,
	})
	return nil
}

// SetUnmodified - set unmodified time since.
func (c *CopyConditions) SetUnmodified(modTime time.Time) error {
	if modTime.IsZero() {
		return ErrInvalidArgument("Modified since cannot be empty.")
	}
	c.conditions = append(c.conditions, copyCondition{
		key:   "x-amz-copy-source-if-unmodified-since",
		value: modTime.Format(http.TimeFormat),
	})
	return nil
}

// SetModified - set modified time since.
func (c *CopyConditions) SetModified(modTime time.Time) error {
	if modTime.IsZero() {
		return ErrInvalidArgument("Modified since cannot be empty.")
	}
	c.conditions = append(c.conditions, copyCondition{
		key:   "x-amz-copy-source-if-modified-since",
		value: modTime.Format(http.TimeFormat),
	})
	return nil
}
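
This file's removal is the breaking change behind the test updates earlier in the diff: the `CopyConditions` value passed to `CopyObject` is gone, and conditions are set on the `SourceInfo` instead. A minimal migration sketch, assuming placeholder bucket, object and ETag values:

```go
package main

import (
	"log"

	minio "github.com/minio/minio-go"
)

// conditionalCopy shows the replacement API: copy conditions now live on
// SourceInfo (SetMatchETagCond and friends) rather than on a CopyConditions
// value passed to CopyObject. Buckets, objects and the ETag are placeholders.
func conditionalCopy(c *minio.Client) error {
	src := minio.NewSourceInfo("srcbucket", "srcobject", nil)
	if err := src.SetMatchETagCond("31624deb84149d2f8ef9c385918b653a"); err != nil {
		return err
	}
	dst, err := minio.NewDestinationInfo("dstbucket", "dstobject", nil, nil)
	if err != nil {
		return err
	}
	return c.CopyObject(dst, src)
}

func main() {
	c, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
	if err != nil {
		log.Fatalln(err)
	}
	if err := conditionalCopy(c); err != nil {
		log.Fatalln(err)
	}
}
```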
vendor/src/github.com/minio/minio-go/core.go (vendored, 12 changes)
@@ -70,7 +70,13 @@ func (c Core) ListMultipartUploads(bucket, prefix, keyMarker, uploadIDMarker, de

// PutObjectPart - Upload an object part.
func (c Core) PutObjectPart(bucket, object, uploadID string, partID int, size int64, data io.Reader, md5Sum, sha256Sum []byte) (ObjectPart, error) {
	return c.uploadPart(bucket, object, uploadID, data, partID, md5Sum, sha256Sum, size)
	return c.PutObjectPartWithMetadata(bucket, object, uploadID, partID, size, data, md5Sum, sha256Sum, nil)
}

// PutObjectPartWithMetadata - upload an object part with additional request metadata.
func (c Core) PutObjectPartWithMetadata(bucket, object, uploadID string, partID int,
	size int64, data io.Reader, md5Sum, sha256Sum []byte, metadata map[string][]string) (ObjectPart, error) {
	return c.uploadPart(bucket, object, uploadID, data, partID, md5Sum, sha256Sum, size, metadata)
}

// ListObjectParts - List uploaded parts of an incomplete upload.
@@ -80,7 +86,9 @@ func (c Core) ListObjectParts(bucket, object, uploadID string, partNumberMarker

// CompleteMultipartUpload - Concatenate uploaded parts and commit to an object.
func (c Core) CompleteMultipartUpload(bucket, object, uploadID string, parts []CompletePart) error {
	_, err := c.completeMultipartUpload(bucket, object, uploadID, completeMultipartUpload{Parts: parts})
	_, err := c.completeMultipartUpload(bucket, object, uploadID, completeMultipartUpload{
		Parts: parts,
	})
	return err
}
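
The `Core` methods above expose the raw multipart calls. A sketch of a minimal one-part multipart session built on them; note that `NewCore` and `NewMultipartUpload` do not appear in this hunk and are assumed from the rest of core.go, and all names and credentials are placeholders:

```go
package main

import (
	"bytes"
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	// NewCore is assumed from the rest of core.go; only PutObjectPart
	// (+WithMetadata) and CompleteMultipartUpload appear in the hunk above.
	core, err := minio.NewCore("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
	if err != nil {
		log.Fatalln(err)
	}

	// NewMultipartUpload (assumed) starts the session and returns its ID.
	uploadID, err := core.NewMultipartUpload("my-bucket", "my-object", nil)
	if err != nil {
		log.Fatalln(err)
	}

	data := bytes.Repeat([]byte("a"), 5*1024*1024) // one 5 MiB part
	part, err := core.PutObjectPart("my-bucket", "my-object", uploadID,
		1, int64(len(data)), bytes.NewReader(data), nil, nil)
	if err != nil {
		log.Fatalln(err)
	}

	// Commit the uploaded part(s) as the final object.
	err = core.CompleteMultipartUpload("my-bucket", "my-object", uploadID,
		[]minio.CompletePart{{PartNumber: part.PartNumber, ETag: part.ETag}})
	if err != nil {
		log.Fatalln(err)
	}
}
```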
vendor/src/github.com/minio/minio-go/docs/API.md (vendored, 201 changes)
@@ -50,17 +50,21 @@ func main() {
}
```

| Bucket operations |Object operations | Encrypted Object operations | Presigned operations | Bucket Policy/Notification Operations | Client custom settings |
|:---|:---|:---|:---|:---|:---|
|[`MakeBucket`](#MakeBucket) |[`GetObject`](#GetObject) | [`NewSymmetricKey`](#NewSymmetricKey) | [`PresignedGetObject`](#PresignedGetObject) |[`SetBucketPolicy`](#SetBucketPolicy) | [`SetAppInfo`](#SetAppInfo) |
|[`ListBuckets`](#ListBuckets) |[`PutObject`](#PutObject) | [`NewAsymmetricKey`](#NewAsymmetricKey) |[`PresignedPutObject`](#PresignedPutObject) | [`GetBucketPolicy`](#GetBucketPolicy) | [`SetCustomTransport`](#SetCustomTransport) |
|[`BucketExists`](#BucketExists) |[`CopyObject`](#CopyObject) | [`GetEncryptedObject`](#GetEncryptedObject) |[`PresignedPostPolicy`](#PresignedPostPolicy) | [`ListBucketPolicies`](#ListBucketPolicies) | [`TraceOn`](#TraceOn) |
| [`RemoveBucket`](#RemoveBucket) |[`StatObject`](#StatObject) | [`PutObjectStreaming`](#PutObjectStreaming) | | [`SetBucketNotification`](#SetBucketNotification) | [`TraceOff`](#TraceOff) |
|[`ListObjects`](#ListObjects) |[`RemoveObject`](#RemoveObject) | [`PutEncryptedObject`](#PutEncryptedObject) | | [`GetBucketNotification`](#GetBucketNotification) | [`SetS3TransferAccelerate`](#SetS3TransferAccelerate) |
|[`ListObjectsV2`](#ListObjectsV2) | [`RemoveObjects`](#RemoveObjects) | | | [`RemoveAllBucketNotification`](#RemoveAllBucketNotification) |
|[`ListIncompleteUploads`](#ListIncompleteUploads) | [`RemoveIncompleteUpload`](#RemoveIncompleteUpload) | | | [`ListenBucketNotification`](#ListenBucketNotification) |
| | [`FPutObject`](#FPutObject) | | | |
| | [`FGetObject`](#FGetObject) | | | |
| Bucket operations | Object operations | Encrypted Object operations | Presigned operations | Bucket Policy/Notification Operations | Client custom settings |
| :--- | :--- | :--- | :--- | :--- | :--- |
| [`MakeBucket`](#MakeBucket) | [`GetObject`](#GetObject) | [`NewSymmetricKey`](#NewSymmetricKey) | [`PresignedGetObject`](#PresignedGetObject) | [`SetBucketPolicy`](#SetBucketPolicy) | [`SetAppInfo`](#SetAppInfo) |
| [`ListBuckets`](#ListBuckets) | [`PutObject`](#PutObject) | [`NewAsymmetricKey`](#NewAsymmetricKey) | [`PresignedPutObject`](#PresignedPutObject) | [`GetBucketPolicy`](#GetBucketPolicy) | [`SetCustomTransport`](#SetCustomTransport) |
| [`BucketExists`](#BucketExists) | [`CopyObject`](#CopyObject) | [`GetEncryptedObject`](#GetEncryptedObject) | [`PresignedPostPolicy`](#PresignedPostPolicy) | [`ListBucketPolicies`](#ListBucketPolicies) | [`TraceOn`](#TraceOn) |
| [`RemoveBucket`](#RemoveBucket) | [`StatObject`](#StatObject) | [`PutObjectStreaming`](#PutObjectStreaming) | | [`SetBucketNotification`](#SetBucketNotification) | [`TraceOff`](#TraceOff) |
| [`ListObjects`](#ListObjects) | [`RemoveObject`](#RemoveObject) | [`PutEncryptedObject`](#PutEncryptedObject) | | [`GetBucketNotification`](#GetBucketNotification) | [`SetS3TransferAccelerate`](#SetS3TransferAccelerate) |
| [`ListObjectsV2`](#ListObjectsV2) | [`RemoveObjects`](#RemoveObjects) | [`NewSSEInfo`](#NewSSEInfo) | | [`RemoveAllBucketNotification`](#RemoveAllBucketNotification) | |
| [`ListIncompleteUploads`](#ListIncompleteUploads) | [`RemoveIncompleteUpload`](#RemoveIncompleteUpload) | | | [`ListenBucketNotification`](#ListenBucketNotification) | |
| | [`FPutObject`](#FPutObject) | | | | |
| | [`FGetObject`](#FGetObject) | | | | |
| | [`ComposeObject`](#ComposeObject) | | | | |
| | [`NewSourceInfo`](#NewSourceInfo) | | | | |
| | [`NewDestinationInfo`](#NewDestinationInfo) | | | | |


## 1. Constructor
<a name="Minio"></a>
@@ -502,9 +506,11 @@


<a name="CopyObject"></a>
### CopyObject(bucketName, objectName, objectSource string, conditions CopyConditions) error
### CopyObject(dst DestinationInfo, src SourceInfo) error

Copy a source object into a new object with the provided name in the provided bucket.
Create or replace an object through server-side copying of an existing object. It supports conditional copying, copying a part of an object, and server-side encryption of the destination and decryption of the source. See the `SourceInfo` and `DestinationInfo` types for further details.

To copy multiple source objects into a single destination object see the `ComposeObject` API.


__Parameters__
@@ -512,50 +518,161 @@ __Parameters__

|Param |Type |Description |
|:---|:---| :---|
|`bucketName` | _string_ |Name of the bucket |
|`objectName` | _string_ |Name of the object |
|`objectSource` | _string_ |Name of the source object |
|`conditions` | _CopyConditions_ |Collection of supported CopyObject conditions. [`x-amz-copy-source`, `x-amz-copy-source-if-match`, `x-amz-copy-source-if-none-match`, `x-amz-copy-source-if-unmodified-since`, `x-amz-copy-source-if-modified-since`]|
|`dst` | _DestinationInfo_ |Argument describing the destination object |
|`src` | _SourceInfo_ |Argument describing the source object |


__Example__


```go
// Use-case-1
// To copy an existing object to a new object with _no_ copy conditions.
copyConds := minio.CopyConditions{}
err := minioClient.CopyObject("mybucket", "myobject", "my-sourcebucketname/my-sourceobjectname", copyConds)
// Use-case 1: Simple copy object with no conditions, etc
// Source object
src := minio.NewSourceInfo("my-sourcebucketname", "my-sourceobjectname", nil)

// Destination object
dst := minio.NewDestinationInfo("my-bucketname", "my-objectname", nil, nil)

// Copy object call
err = s3Client.CopyObject(dst, src)
if err != nil {
	fmt.Println(err)
	return
}

// Use-case-2
// To copy an existing object to a new object with the following copy conditions
// Use-case 2: Copy object with copy-conditions, and copying only part of the source object.
// 1. that matches a given ETag
// 2. and modified after 1st April 2014
// 3. but unmodified since 23rd April 2014
// 4. copy only first 1MiB of object.

// Initialize empty copy conditions.
var copyConds = minio.CopyConditions{}
// Source object
src := minio.NewSourceInfo("my-sourcebucketname", "my-sourceobjectname", nil)

// copy object that matches the given ETag.
copyConds.SetMatchETag("31624deb84149d2f8ef9c385918b653a")
// Set matching ETag condition, copy object which matches the following ETag.
src.SetMatchETagCond("31624deb84149d2f8ef9c385918b653a")

// and modified after 1st April 2014
copyConds.SetModified(time.Date(2014, time.April, 1, 0, 0, 0, 0, time.UTC))
// Set modified condition, copy object modified since 2014 April 1.
src.SetModifiedSinceCond(time.Date(2014, time.April, 1, 0, 0, 0, 0, time.UTC))

// but unmodified since 23rd April 2014
copyConds.SetUnmodified(time.Date(2014, time.April, 23, 0, 0, 0, 0, time.UTC))
// Set unmodified condition, copy object unmodified since 2014 April 23.
src.SetUnmodifiedSinceCond(time.Date(2014, time.April, 23, 0, 0, 0, 0, time.UTC))

err := minioClient.CopyObject("mybucket", "myobject", "my-sourcebucketname/my-sourceobjectname", copyConds)
// Set copy-range of only first 1MiB of file.
src.SetRange(0, 1024*1024-1)

// Destination object
dst := minio.NewDestinationInfo("my-bucketname", "my-objectname", nil, nil)

// Copy object call
err = s3Client.CopyObject(dst, src)
if err != nil {
	fmt.Println(err)
	return
}
```

<a name="ComposeObject"></a>
### ComposeObject(dst DestinationInfo, srcs []SourceInfo) error

Create an object by concatenating a list of source objects using
server-side copying.

__Parameters__


|Param |Type |Description |
|:---|:---|:---|
|`dst` | _minio.DestinationInfo_ |Struct with info about the object to be created. |
|`srcs` | _[]minio.SourceInfo_ |Slice of struct with info about source objects to be concatenated in order. |


__Example__


```go
// Prepare source decryption key (here we assume same key to
// decrypt all source objects.)
decKey := minio.NewSSEInfo([]byte{1, 2, 3}, "")

// Source objects to concatenate. We also specify decryption
// key for each
src1 := minio.NewSourceInfo("bucket1", "object1", decKey)
src1.SetMatchETagCond("31624deb84149d2f8ef9c385918b653a")

src2 := minio.NewSourceInfo("bucket2", "object2", decKey)
src2.SetMatchETagCond("f8ef9c385918b653a31624deb84149d2")

src3 := minio.NewSourceInfo("bucket3", "object3", decKey)
src3.SetMatchETagCond("5918b653a31624deb84149d2f8ef9c38")

// Create slice of sources.
srcs := []minio.SourceInfo{src1, src2, src3}

// Prepare destination encryption key
encKey := minio.NewSSEInfo([]byte{8, 9, 0}, "")

// Create destination info
dst := minio.NewDestinationInfo("bucket", "object", encKey, nil)
err = s3Client.ComposeObject(dst, srcs)
if err != nil {
	log.Println(err)
	return
}

log.Println("Composed object successfully.")
```

<a name="NewSourceInfo"></a>
### NewSourceInfo(bucket, object string, decryptSSEC *SSEInfo) SourceInfo

Construct a `SourceInfo` object that can be used as the source for server-side copying operations like `CopyObject` and `ComposeObject`. This object can be used to set copy-conditions on the source.

__Parameters__

| Param | Type | Description |
| :--- | :--- | :--- |
| `bucket` | _string_ | Name of the source bucket |
| `object` | _string_ | Name of the source object |
| `decryptSSEC` | _*minio.SSEInfo_ | Decryption info for the source object (`nil` without encryption) |

__Example__

``` go
// No decryption parameter.
src := NewSourceInfo("bucket", "object", nil)

// With decryption parameter.
decKey := NewSSEInfo([]byte{1, 2, 3}, "")
src := NewSourceInfo("bucket", "object", decKey)
```

<a name="NewDestinationInfo"></a>
### NewDestinationInfo(bucket, object string, encryptSSEC *SSEInfo, userMeta map[string]string) DestinationInfo

Construct a `DestinationInfo` object that can be used as the destination object for server-side copying operations like `CopyObject` and `ComposeObject`.

__Parameters__

| Param | Type | Description |
| :--- | :--- | :--- |
| `bucket` | _string_ | Name of the destination bucket |
| `object` | _string_ | Name of the destination object |
| `encryptSSEC` | _*minio.SSEInfo_ | Encryption info for the destination object (`nil` without encryption) |
| `userMeta` | _map[string]string_ | User metadata to be set on the destination. If nil, with only one source, user-metadata is copied from source. |

__Example__

``` go
// No encryption parameter.
dst := NewDestinationInfo("bucket", "object", nil, nil)

// With encryption parameter.
encKey := NewSSEInfo([]byte{1, 2, 3}, "")
dst := NewDestinationInfo("bucket", "object", encKey, nil)
```


<a name="FPutObject"></a>
### FPutObject(bucketName, objectName, filePath, contentType string) (length int64, err error)

@@ -881,6 +998,26 @@ if err != nil {
}
```

<a name="NewSSEInfo"></a>

### NewSSEInfo(key []byte, algo string) SSEInfo

Create a key object for use as an encryption or decryption parameter in operations involving server-side-encryption with a customer-provided key (SSE-C).

__Parameters__

| Param | Type | Description |
| :--- | :--- | :--- |
| `key` | _[]byte_ | Byte-slice of the raw, un-encoded binary key |
| `algo` | _string_ | Algorithm to use in encryption or decryption with the given key. Can be empty (defaults to `AES256`) |

__Example__

``` go
// Key for use in encryption/decryption
keyInfo := NewSSEInfo([]byte{1, 2, 3}, "")
```

## 5. Presigned operations

<a name="PresignedGetObject"></a>
vendor/src/github.com/minio/minio-go/examples/s3/composeobject.go (vendored, new file, 74 lines)
@@ -0,0 +1,74 @@
// +build ignore

/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2016 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package main

import (
	"log"

	minio "github.com/minio/minio-go"
)

func main() {
	// Note: YOUR-ACCESSKEYID, YOUR-SECRETACCESSKEY, my-testfile, my-bucketname and
	// my-objectname are dummy values, please replace them with original values.

	// Requests are always secure (HTTPS) by default. Set secure=false to enable insecure (HTTP) access.
	// This boolean value is the last argument for New().

	// New returns an Amazon S3 compatible client object. API compatibility (v2 or v4) is automatically
	// determined based on the Endpoint value.
	s3Client, err := minio.New("s3.amazonaws.com", "YOUR-ACCESSKEYID", "YOUR-SECRETACCESSKEY", true)
	if err != nil {
		log.Fatalln(err)
	}

	// Enable trace.
	// s3Client.TraceOn(os.Stderr)

	// Prepare source decryption key (here we assume same key to
	// decrypt all source objects.)
	decKey := minio.NewSSEInfo([]byte{1, 2, 3}, "")

	// Source objects to concatenate. We also specify decryption
	// key for each
	src1 := minio.NewSourceInfo("bucket1", "object1", decKey)
	src1.SetMatchETagCond("31624deb84149d2f8ef9c385918b653a")

	src2 := minio.NewSourceInfo("bucket2", "object2", decKey)
	src2.SetMatchETagCond("f8ef9c385918b653a31624deb84149d2")

	src3 := minio.NewSourceInfo("bucket3", "object3", decKey)
	src3.SetMatchETagCond("5918b653a31624deb84149d2f8ef9c38")

	// Create slice of sources.
	srcs := []minio.SourceInfo{src1, src2, src3}

	// Prepare destination encryption key
	encKey := minio.NewSSEInfo([]byte{8, 9, 0}, "")

	// Create destination info
	dst := minio.NewDestinationInfo("bucket", "object", encKey, nil)
	err = s3Client.ComposeObject(dst, srcs)
	if err != nil {
		log.Println(err)
		return
	}

	log.Println("Composed object successfully.")
}
@@ -42,24 +42,28 @@ func main() {
	// Enable trace.
	// s3Client.TraceOn(os.Stderr)

	// Source object
	src := minio.NewSourceInfo("my-sourcebucketname", "my-sourceobjectname", nil)

	// All following conditions are allowed and can be combined together.

	// Set copy conditions.
	var copyConds = minio.CopyConditions{}
	// Set modified condition, copy object modified since 2014 April.
	copyConds.SetModified(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
	src.SetModifiedSinceCond(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))

	// Set unmodified condition, copy object unmodified since 2014 April.
	// copyConds.SetUnmodified(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))
	// src.SetUnmodifiedSinceCond(time.Date(2014, time.April, 0, 0, 0, 0, 0, time.UTC))

	// Set matching ETag condition, copy object which matches the following ETag.
	// copyConds.SetMatchETag("31624deb84149d2f8ef9c385918b653a")
	// src.SetMatchETagCond("31624deb84149d2f8ef9c385918b653a")

	// Set matching ETag except condition, copy object which does not match the following ETag.
	// copyConds.SetMatchETagExcept("31624deb84149d2f8ef9c385918b653a")
	// src.SetMatchETagExceptCond("31624deb84149d2f8ef9c385918b653a")

	// Destination object
	dst := minio.NewDestinationInfo("my-bucketname", "my-objectname", nil, nil)

	// Initiate copy object.
	err = s3Client.CopyObject("my-bucketname", "my-objectname", "/my-sourcebucketname/my-sourceobjectname", copyConds)
	err = s3Client.CopyObject(dst, src)
	if err != nil {
		log.Fatalln(err)
	}
@@ -254,7 +254,18 @@ func (s *StreamingReader) Read(buf []byte) (int, error) {
		s.chunkBufLen = 0
		for {
			n1, err := s.baseReadCloser.Read(s.chunkBuf[s.chunkBufLen:])
			if err == nil || err == io.ErrUnexpectedEOF {
			// Usually we validate `err` first, but in this case
			// we are validating n > 0 for the following reasons.
			//
			// 1. n > 0, err is one of io.EOF, nil (near end of stream)
			// A Reader returning a non-zero number of bytes at the end
			// of the input stream may return either err == EOF or err == nil
			//
			// 2. n == 0, err is io.EOF (actual end of stream)
			//
			// Callers should always process the n > 0 bytes returned
			// before considering the error err.
			if n1 > 0 {
				s.chunkBufLen += n1
				s.bytesRead += int64(n1)

@@ -265,25 +276,26 @@ func (s *StreamingReader) Read(buf []byte) (int, error) {
					s.signChunk(s.chunkBufLen)
					break
				}
			}
			if err != nil {
				if err == io.EOF {
					// No more data left in baseReader - last chunk.
					// Done reading the last chunk from baseReader.
					s.done = true

			} else if err == io.EOF {
				// No more data left in baseReader - last chunk.
				// Done reading the last chunk from baseReader.
				s.done = true
				// bytes read from baseReader different than
				// content length provided.
				if s.bytesRead != s.contentLen {
					return 0, io.ErrUnexpectedEOF
				}

				// bytes read from baseReader different than
				// content length provided.
				if s.bytesRead != s.contentLen {
					return 0, io.ErrUnexpectedEOF
				// Sign the chunk and write it to s.buf.
				s.signChunk(0)
				break
			}

				// Sign the chunk and write it to s.buf.
				s.signChunk(0)
				break

			} else {
				return 0, err
			}

		}
	}
	return s.buf.Read(buf)
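
The long comment added above restates the `io.Reader` contract: consume the `n > 0` bytes first, and only then look at `err`, which may be `io.EOF` on the very call that returned data. A generic read loop honoring that contract:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// readAll demonstrates the contract spelled out above: always process the
// n > 0 bytes first, and only then inspect err (which may be io.EOF on the
// same call that returned data).
func readAll(r io.Reader) (int, error) {
	buf := make([]byte, 8)
	total := 0
	for {
		n, err := r.Read(buf)
		if n > 0 {
			total += n // process buf[:n] before looking at err
		}
		if err == io.EOF {
			return total, nil // clean end of stream
		}
		if err != nil {
			return total, err
		}
	}
}

func main() {
	n, err := readAll(strings.NewReader("hello, reader contract"))
	fmt.Println(n, err)
}
```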
vendor/src/github.com/minio/minio-go/tempfile.go (vendored, file removed, 60 lines)
@@ -1,60 +0,0 @@
/*
 * Minio Go Library for Amazon S3 Compatible Cloud Storage (C) 2015 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package minio

import (
	"io/ioutil"
	"os"
	"sync"
)

// tempFile - temporary file container.
type tempFile struct {
	*os.File
	mutex *sync.Mutex
}

// newTempFile returns a new temporary file, once closed it automatically deletes itself.
func newTempFile(prefix string) (*tempFile, error) {
	// use platform specific temp directory.
	file, err := ioutil.TempFile(os.TempDir(), prefix)
	if err != nil {
		return nil, err
	}
	return &tempFile{
		File:  file,
		mutex: &sync.Mutex{},
	}, nil
}

// Close - closer wrapper to close and remove temporary file.
func (t *tempFile) Close() error {
	t.mutex.Lock()
	defer t.mutex.Unlock()
	if t.File != nil {
		// Close the file.
		if err := t.File.Close(); err != nil {
			return err
		}
		// Remove file.
		if err := os.Remove(t.File.Name()); err != nil {
			return err
		}
		t.File = nil
	}
	return nil
}
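
With tempfile.go removed, callers lose the helper whose `Close` also deleted the file. The same effect at a call site is a pair of deferred calls; a minimal sketch (the prefix and contents are placeholders):

```go
package main

import (
	"io/ioutil"
	"log"
	"os"
)

func main() {
	// Equivalent of the removed newTempFile helper: create the temp file
	// and remove it when done, regardless of how the function exits.
	f, err := ioutil.TempFile(os.TempDir(), "testing")
	if err != nil {
		log.Fatalln(err)
	}
	defer os.Remove(f.Name()) // runs after Close (defers are LIFO)
	defer f.Close()

	if _, err := f.WriteString("scratch data"); err != nil {
		log.Fatalln(err)
	}
}
```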
@@ -122,7 +122,7 @@ func isValidEndpointURL(endpointURL url.URL) error {
	if endpointURL.Path != "/" && endpointURL.Path != "" {
		return ErrInvalidArgument("Endpoint url cannot have fully qualified paths.")
	}
	if strings.Contains(endpointURL.Host, ".amazonaws.com") {
	if strings.Contains(endpointURL.Host, ".s3.amazonaws.com") {
		if !s3utils.IsAmazonEndpoint(endpointURL) {
			return ErrInvalidArgument("Amazon S3 endpoint should be 's3.amazonaws.com'.")
		}

@@ -84,9 +84,9 @@ func TestGetEndpointURL(t *testing.T) {
		{"s3.cn-north-1.amazonaws.com.cn", false, "http://s3.cn-north-1.amazonaws.com.cn", nil, true},
		{"192.168.1.1:9000", false, "http://192.168.1.1:9000", nil, true},
		{"192.168.1.1:9000", true, "https://192.168.1.1:9000", nil, true},
		{"s3.amazonaws.com:443", true, "https://s3.amazonaws.com:443", nil, true},
		{"13333.123123.-", true, "", ErrInvalidArgument(fmt.Sprintf("Endpoint: %s does not follow ip address or domain name standards.", "13333.123123.-")), false},
		{"13333.123123.-", true, "", ErrInvalidArgument(fmt.Sprintf("Endpoint: %s does not follow ip address or domain name standards.", "13333.123123.-")), false},
		{"s3.amazonaws.com:443", true, "", ErrInvalidArgument("Amazon S3 endpoint should be 's3.amazonaws.com'."), false},
		{"storage.googleapis.com:4000", true, "", ErrInvalidArgument("Google Cloud Storage endpoint should be 'storage.googleapis.com'."), false},
		{"s3.aamzza.-", true, "", ErrInvalidArgument(fmt.Sprintf("Endpoint: %s does not follow ip address or domain name standards.", "s3.aamzza.-")), false},
		{"", true, "", ErrInvalidArgument("Endpoint: does not follow ip address or domain name standards."), false},
@@ -132,10 +132,11 @@ func TestIsValidEndpointURL(t *testing.T) {
		{"https://s3-fips-us-gov-west-1.amazonaws.com", nil, true},
		{"https://s3.amazonaws.com/", nil, true},
		{"https://storage.googleapis.com/", nil, true},
		{"https://z3.amazonaws.com", nil, true},
		{"https://mybalancer.us-east-1.elb.amazonaws.com", nil, true},
		{"192.168.1.1", ErrInvalidArgument("Endpoint url cannot have fully qualified paths."), false},
		{"https://amazon.googleapis.com/", ErrInvalidArgument("Google Cloud Storage endpoint should be 'storage.googleapis.com'."), false},
		{"https://storage.googleapis.com/bucket/", ErrInvalidArgument("Endpoint url cannot have fully qualified paths."), false},
		{"https://z3.amazonaws.com", ErrInvalidArgument("Amazon S3 endpoint should be 's3.amazonaws.com'."), false},
		{"https://s3.amazonaws.com/bucket/object", ErrInvalidArgument("Endpoint url cannot have fully qualified paths."), false},
	}
vendor/src/github.com/ncw/swift/swift_test.go (vendored, 33 changes)
@@ -75,6 +75,7 @@ func makeConnection(t *testing.T) (*swift.Connection, func()) {
	ConnectionChannelTimeout := os.Getenv("SWIFT_CONNECTION_CHANNEL_TIMEOUT")
	DataChannelTimeout := os.Getenv("SWIFT_DATA_CHANNEL_TIMEOUT")

	internalServer := false
	if UserName == "" || ApiKey == "" || AuthUrl == "" {
		srv, err = swifttest.NewSwiftServer("localhost")
		if err != nil && t != nil {
@@ -84,6 +85,7 @@ func makeConnection(t *testing.T) (*swift.Connection, func()) {
		UserName = "swifttest"
		ApiKey = "swifttest"
		AuthUrl = srv.AuthURL
		internalServer = true
	}

	transport := &http.Transport{
@@ -105,6 +107,16 @@ func makeConnection(t *testing.T) (*swift.Connection, func()) {
		EndpointType: swift.EndpointType(EndpointType),
	}

	if !internalServer {
		if isV3Api() {
			c.Tenant = os.Getenv("SWIFT_TENANT")
			c.Domain = os.Getenv("SWIFT_API_DOMAIN")
		} else {
			c.Tenant = os.Getenv("SWIFT_TENANT")
			c.TenantId = os.Getenv("SWIFT_TENANT_ID")
		}
	}

	var timeout int64
	if ConnectionChannelTimeout != "" {
		timeout, err = strconv.ParseInt(ConnectionChannelTimeout, 10, 32)
@@ -304,14 +316,6 @@ func TestTransport(t *testing.T) {

	c.Transport = tr

	if isV3Api() {
		c.Tenant = os.Getenv("SWIFT_TENANT")
		c.Domain = os.Getenv("SWIFT_API_DOMAIN")
	} else {
		c.Tenant = os.Getenv("SWIFT_TENANT")
		c.TenantId = os.Getenv("SWIFT_TENANT_ID")
	}

	err := c.Authenticate()
	if err != nil {
		t.Fatal("Auth failed", err)
@@ -329,9 +333,6 @@ func TestV1V2Authenticate(t *testing.T) {
	c, rollback := makeConnection(t)
	defer rollback()

	c.Tenant = os.Getenv("SWIFT_TENANT")
	c.TenantId = os.Getenv("SWIFT_TENANT_ID")

	err := c.Authenticate()
	if err != nil {
		t.Fatal("Auth failed", err)
@@ -349,8 +350,10 @@ func TestV3AuthenticateWithDomainNameAndTenantId(t *testing.T) {
	c, rollback := makeConnection(t)
	defer rollback()

	c.TenantId = os.Getenv("SWIFT_TENANT_ID")
	c.Tenant = ""
	c.Domain = os.Getenv("SWIFT_API_DOMAIN")
	c.TenantId = os.Getenv("SWIFT_TENANT_ID")
	c.DomainId = ""

	err := c.Authenticate()
	if err != nil {
@@ -388,6 +391,8 @@ func TestV3AuthenticateWithDomainIdAndTenantId(t *testing.T) {
	c, rollback := makeConnection(t)
	defer rollback()

	c.Tenant = ""
	c.Domain = ""
	c.TenantId = os.Getenv("SWIFT_TENANT_ID")
	c.DomainId = os.Getenv("SWIFT_API_DOMAIN_ID")

@@ -410,6 +415,8 @@ func TestV3AuthenticateWithDomainNameAndTenantName(t *testing.T) {

	c.Tenant = os.Getenv("SWIFT_TENANT")
	c.Domain = os.Getenv("SWIFT_API_DOMAIN")
	c.TenantId = ""
	c.DomainId = ""

	err := c.Authenticate()
	if err != nil {
@@ -429,6 +436,8 @@ func TestV3AuthenticateWithDomainIdAndTenantName(t *testing.T) {
	defer rollback()

	c.Tenant = os.Getenv("SWIFT_TENANT")
	c.Domain = ""
	c.TenantId = ""
	c.DomainId = os.Getenv("SWIFT_API_DOMAIN_ID")

	err := c.Authenticate()
@@ -15,6 +15,7 @@ func noErrors(at, depth int) error {
	}
	return noErrors(at+1, depth)
}

func yesErrors(at, depth int) error {
	if at >= depth {
		return New("ye error")
@@ -22,8 +23,11 @@ func yesErrors(at, depth int) error {
	return yesErrors(at+1, depth)
}

// GlobalE is an exported global to store the result of benchmark results,
// preventing the compiler from optimising the benchmark functions away.
var GlobalE error

func BenchmarkErrors(b *testing.B) {
	var toperr error
	type run struct {
		stack int
		std   bool
@@ -53,7 +57,7 @@ func BenchmarkErrors(b *testing.B) {
				err = f(0, r.stack)
			}
			b.StopTimer()
			toperr = err
			GlobalE = err
		})
	}
}

@@ -196,7 +196,6 @@ func TestWithMessage(t *testing.T) {
			t.Errorf("WithMessage(%v, %q): got: %q, want %q", tt.err, tt.message, got, tt.want)
		}
	}

}

// errors.New, etc values are not expected to be compared by value
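
Replacing the function-local `toperr` with the exported `GlobalE` is the usual way to keep a benchmark honest: a package-level, exported sink prevents the compiler from proving the result unused and eliminating the measured work. A standalone sketch of the idiom (package and function names are illustrative):

```go
package bench

import "testing"

// Sink is package-level and exported so the compiler cannot prove the
// benchmark result is unused and eliminate the loop body.
var Sink int

func work(n int) int {
	s := 0
	for i := 0; i < n; i++ {
		s += i * i
	}
	return s
}

func BenchmarkWork(b *testing.B) {
	var r int
	for i := 0; i < b.N; i++ {
		r = work(1024)
	}
	Sink = r // publish once, after the timed loop
}
```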
vendor/src/github.com/pkg/errors/stack.go (vendored, 8 changes)
@@ -79,6 +79,14 @@ func (f Frame) Format(s fmt.State, verb rune) {
// StackTrace is stack of Frames from innermost (newest) to outermost (oldest).
type StackTrace []Frame

// Format formats the stack of Frames according to the fmt.Formatter interface.
//
//	%s	lists source files for each Frame in the stack
//	%v	lists the source file and line number for each Frame in the stack
//
// Format accepts flags that alter the printing of some verbs, as follows:
//
//	%+v	Prints filename, function, and line number for each Frame in the stack.
func (st StackTrace) Format(s fmt.State, verb rune) {
	switch verb {
	case 'v':
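
The new doc comment spells out the verbs `StackTrace` understands via `fmt.Formatter`. In practice this surfaces through the error values themselves; a short usage sketch:

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	err := errors.New("boom")

	fmt.Printf("%s\n", err)  // message only
	fmt.Printf("%v\n", err)  // message only
	fmt.Printf("%+v\n", err) // message plus function, file and line per frame
}
```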
vendor/src/github.com/pkg/profile/README.md (vendored, 7 changes)
@@ -45,3 +45,10 @@ func main() {
Several convenience package level values are provided for cpu, memory, and block (contention) profiling.

For more complex options, consult the [documentation](http://godoc.org/github.com/pkg/profile).

contributing
------------

We welcome pull requests, bug fixes and issue reports.

Before proposing a change, please discuss it first by raising an issue.

@@ -31,7 +31,7 @@ func ExampleMemProfileRate() {

func ExampleProfilePath() {
	// set the location that the profile will be written to
	defer profile.Start(profile.ProfilePath(os.Getenv("HOME")))
	defer profile.Start(profile.ProfilePath(os.Getenv("HOME"))).Stop()
}

func ExampleNoShutdownHook() {
@@ -41,13 +41,15 @@ func ExampleNoShutdownHook() {

func ExampleStart_withFlags() {
	// use the flags package to selectively enable profiling.
	mode := flag.String("profile.mode", "", "enable profiling mode, one of [cpu, mem, block]")
	mode := flag.String("profile.mode", "", "enable profiling mode, one of [cpu, mem, mutex, block]")
	flag.Parse()
	switch *mode {
	case "cpu":
		defer profile.Start(profile.CPUProfile).Stop()
	case "mem":
		defer profile.Start(profile.MemProfile).Stop()
	case "mutex":
		defer profile.Start(profile.MutexProfile).Stop()
	case "block":
		defer profile.Start(profile.BlockProfile).Stop()
	default:
vendor/src/github.com/pkg/profile/mutex.go (vendored, new file, 13 lines)
@@ -0,0 +1,13 @@
// +build go1.8

package profile

import "runtime"

func enableMutexProfile() {
	runtime.SetMutexProfileFraction(1)
}

func disableMutexProfile() {
	runtime.SetMutexProfileFraction(0)
}
vendor/src/github.com/pkg/profile/mutex17.go (vendored, new file, 9 lines)
@@ -0,0 +1,9 @@
// +build !go1.8

package profile

// mock mutex support for Go 1.7 and earlier.

func enableMutexProfile() {}

func disableMutexProfile() {}
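
Together with the `mutex` case added to `ExampleStart_withFlags` above, these two build-tagged files make mutex profiling a selectable mode on Go 1.8+ and a silent no-op on earlier releases. Usage mirrors the other profile modes:

```go
package main

import "github.com/pkg/profile"

func main() {
	// Collect mutex contention data for the lifetime of the process.
	// On Go < 1.8 the build-tagged stubs above make this a no-op.
	defer profile.Start(profile.MutexProfile).Stop()

	// ... application work ...
}
```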