Vendor main dependencies.

This commit is contained in:
Timo Reimann 2017-02-07 22:33:23 +01:00
parent 49a09ab7dd
commit dd5e3fba01
2738 changed files with 1045689 additions and 0 deletions

202
vendor/github.com/vulcand/oxy/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

358
vendor/github.com/vulcand/oxy/cbreaker/cbreaker.go generated vendored Normal file

@@ -0,0 +1,358 @@
// package cbreaker implements a circuit breaker similar to https://github.com/Netflix/Hystrix/wiki/How-it-Works
//
// Vulcan circuit breaker watches for the error condition to match,
// after which it activates the fallback scenario, e.g. returns the response code
// or redirects the request to another location.
// Circuit breakers start in the Standby state first, observing responses and watching location metrics.
//
// Once the circuit breaker condition is met, it enters the "Tripped" state, where it activates the fallback scenario
// for all requests during the FallbackDuration time period and resets the stats for the location.
//
// After the FallbackDuration time period passes, the circuit breaker enters the "Recovering" state; during that state it will
// start passing some traffic back to the endpoints, increasing the amount of passed requests using a linear function:
//
// allowedRequestsRatio = 0.5 * (Now() - StartRecovery())/RecoveryDuration
//
// Two scenarios are possible in the "Recovering" state:
// 1. The condition matches again; this resets the state to "Tripped" and resets the timer.
// 2. The condition does not match; the circuit breaker enters the "Standby" state.
//
// It is possible to define actions (e.g. webhooks) on transitions between states:
//
// * OnTripped action is called on transition (Standby -> Tripped)
// * OnStandby action is called on transition (Recovering -> Standby)
//
package cbreaker
import (
"fmt"
"net/http"
"sync"
"time"
"github.com/mailgun/timetools"
"github.com/vulcand/oxy/memmetrics"
"github.com/vulcand/oxy/utils"
)
// CircuitBreaker is an http.Handler that implements the circuit breaker pattern
type CircuitBreaker struct {
m *sync.RWMutex
metrics *memmetrics.RTMetrics
condition hpredicate
fallbackDuration time.Duration
recoveryDuration time.Duration
onTripped SideEffect
onStandby SideEffect
state cbState
until time.Time
rc *ratioController
checkPeriod time.Duration
lastCheck time.Time
fallback http.Handler
next http.Handler
log utils.Logger
clock timetools.TimeProvider
}
// New creates a new CircuitBreaker middleware
func New(next http.Handler, expression string, options ...CircuitBreakerOption) (*CircuitBreaker, error) {
cb := &CircuitBreaker{
m: &sync.RWMutex{},
next: next,
// Default values. Might be overwritten by options below.
clock: &timetools.RealTime{},
checkPeriod: defaultCheckPeriod,
fallbackDuration: defaultFallbackDuration,
recoveryDuration: defaultRecoveryDuration,
fallback: defaultFallback,
log: utils.NullLogger,
}
for _, s := range options {
if err := s(cb); err != nil {
return nil, err
}
}
condition, err := parseExpression(expression)
if err != nil {
return nil, err
}
cb.condition = condition
mt, err := memmetrics.NewRTMetrics()
if err != nil {
return nil, err
}
cb.metrics = mt
return cb, nil
}
func (c *CircuitBreaker) ServeHTTP(w http.ResponseWriter, req *http.Request) {
if c.activateFallback(w, req) {
c.fallback.ServeHTTP(w, req)
return
}
c.serve(w, req)
}
func (c *CircuitBreaker) Wrap(next http.Handler) {
c.next = next
}
// activateFallback updates the internal state and returns true if the fallback should be used, false otherwise
func (c *CircuitBreaker) activateFallback(w http.ResponseWriter, req *http.Request) bool {
// Quick check with read locks optimized for normal operation use-case
if c.isStandby() {
return false
}
// Circuit breaker is in tripped or recovering state
c.m.Lock()
defer c.m.Unlock()
c.log.Infof("%v is in error state", c)
switch c.state {
case stateStandby:
// someone else has set it to standby just now
return false
case stateTripped:
if c.clock.UtcNow().Before(c.until) {
return true
}
// We have been in active state enough, enter recovering state
c.setRecovering()
fallthrough
case stateRecovering:
// We have been in recovering state enough, enter standby and allow request
if c.clock.UtcNow().After(c.until) {
c.setState(stateStandby, c.clock.UtcNow())
return false
}
// ratio controller allows this request
if c.rc.allowRequest() {
return false
}
return true
}
return false
}
func (c *CircuitBreaker) serve(w http.ResponseWriter, req *http.Request) {
start := c.clock.UtcNow()
p := &utils.ProxyWriter{W: w}
c.next.ServeHTTP(p, req)
latency := c.clock.UtcNow().Sub(start)
c.metrics.Record(p.Code, latency)
// Note that this call is less expensive than it looks -- checkCondition only performs the real check
// periodically. Because of that we can afford to call it here on every single response.
c.checkAndSet()
}
func (c *CircuitBreaker) isStandby() bool {
c.m.RLock()
defer c.m.RUnlock()
return c.state == stateStandby
}
// String returns log-friendly representation of the circuit breaker state
func (c *CircuitBreaker) String() string {
switch c.state {
case stateTripped, stateRecovering:
return fmt.Sprintf("CircuitBreaker(state=%v, until=%v)", c.state, c.until)
default:
return fmt.Sprintf("CircuitBreaker(state=%v)", c.state)
}
}
// exec executes side effect
func (c *CircuitBreaker) exec(s SideEffect) {
if s == nil {
return
}
go func() {
if err := s.Exec(); err != nil {
c.log.Errorf("%v side effect failure: %v", c, err)
}
}()
}
func (c *CircuitBreaker) setState(new cbState, until time.Time) {
c.log.Infof("%v setting state to %v, until %v", c, new, until)
c.state = new
c.until = until
switch new {
case stateTripped:
c.exec(c.onTripped)
case stateStandby:
c.exec(c.onStandby)
}
}
func (c *CircuitBreaker) timeToCheck() bool {
c.m.RLock()
defer c.m.RUnlock()
return c.clock.UtcNow().After(c.lastCheck)
}
// Checks if tripping condition matches and sets circuit breaker to the tripped state
func (c *CircuitBreaker) checkAndSet() {
if !c.timeToCheck() {
return
}
c.m.Lock()
defer c.m.Unlock()
// Other goroutine could have updated the lastCheck variable before we grabbed mutex
if !c.clock.UtcNow().After(c.lastCheck) {
return
}
c.lastCheck = c.clock.UtcNow().Add(c.checkPeriod)
if c.state == stateTripped {
c.log.Infof("%v skip set tripped", c)
return
}
if !c.condition(c) {
return
}
c.setState(stateTripped, c.clock.UtcNow().Add(c.fallbackDuration))
c.metrics.Reset()
}
func (c *CircuitBreaker) setRecovering() {
c.setState(stateRecovering, c.clock.UtcNow().Add(c.recoveryDuration))
c.rc = newRatioController(c.clock, c.recoveryDuration)
}
// CircuitBreakerOption represents an option you can pass to New.
// See the documentation for the individual options below.
type CircuitBreakerOption func(*CircuitBreaker) error
// Clock allows you to fake the CircuitBreaker's view of the current time.
// Intended for unit tests.
func Clock(clock timetools.TimeProvider) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.clock = clock
return nil
}
}
// FallbackDuration is how long the CircuitBreaker will remain in the Tripped
// state before trying to recover.
func FallbackDuration(d time.Duration) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.fallbackDuration = d
return nil
}
}
// RecoveryDuration is how long the CircuitBreaker will take to ramp up
// requests during the Recovering state.
func RecoveryDuration(d time.Duration) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.recoveryDuration = d
return nil
}
}
// CheckPeriod is how long the CircuitBreaker will wait between successive
// checks of the breaker condition.
func CheckPeriod(d time.Duration) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.checkPeriod = d
return nil
}
}
// OnTripped sets a SideEffect to run when entering the Tripped state.
// Only one SideEffect can be set for this hook.
func OnTripped(s SideEffect) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.onTripped = s
return nil
}
}
// OnStandby sets a SideEffect to run when entering the Standby state.
// Only one SideEffect can be set for this hook.
func OnStandby(s SideEffect) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.onStandby = s
return nil
}
}
// Fallback defines the http.Handler that the CircuitBreaker should route
// requests to when it prevents a request from taking its normal path.
func Fallback(h http.Handler) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.fallback = h
return nil
}
}
// Logger adds logging for the CircuitBreaker.
func Logger(l utils.Logger) CircuitBreakerOption {
return func(c *CircuitBreaker) error {
c.log = l
return nil
}
}
// cbState is the state of the circuit breaker
type cbState int
func (s cbState) String() string {
switch s {
case stateStandby:
return "standby"
case stateTripped:
return "tripped"
case stateRecovering:
return "recovering"
}
return "undefined"
}
const (
// CircuitBreaker is passing all requests and watching stats
stateStandby = iota
// CircuitBreaker activates fallback scenario for all requests
stateTripped
// CircuitBreaker passes some requests to go through, rejecting others
stateRecovering
)
const (
defaultFallbackDuration = 10 * time.Second
defaultRecoveryDuration = 10 * time.Second
defaultCheckPeriod = 100 * time.Millisecond
)
var defaultFallback = &fallback{}
type fallback struct {
}
func (f *fallback) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(http.StatusServiceUnavailable)
w.Write([]byte(http.StatusText(http.StatusServiceUnavailable)))
}
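
For illustration, a minimal sketch (not part of the vendored file) of wrapping an upstream handler with this circuit breaker; the expression, durations and listen address are arbitrary example values:

package main

import (
	"net/http"
	"time"

	"github.com/vulcand/oxy/cbreaker"
)

func main() {
	// Hypothetical upstream handler protected by the breaker.
	upstream := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Trip when more than half of the responses in the rolling window are network errors.
	cb, err := cbreaker.New(upstream, "NetworkErrorRatio() > 0.5",
		cbreaker.FallbackDuration(20*time.Second),
		cbreaker.RecoveryDuration(20*time.Second))
	if err != nil {
		panic(err)
	}
	http.ListenAndServe(":8080", cb) // example listen address
}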

76
vendor/github.com/vulcand/oxy/cbreaker/effect.go generated vendored Normal file

@@ -0,0 +1,76 @@
package cbreaker
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/vulcand/oxy/utils"
)
type SideEffect interface {
Exec() error
}
type Webhook struct {
URL string
Method string
Headers http.Header
Form url.Values
Body []byte
}
type WebhookSideEffect struct {
w Webhook
}
func NewWebhookSideEffect(w Webhook) (*WebhookSideEffect, error) {
if w.Method == "" {
return nil, fmt.Errorf("Supply method")
}
_, err := url.Parse(w.URL)
if err != nil {
return nil, err
}
return &WebhookSideEffect{w: w}, nil
}
func (w *WebhookSideEffect) getBody() io.Reader {
if len(w.w.Form) != 0 {
return strings.NewReader(w.w.Form.Encode())
}
if len(w.w.Body) != 0 {
return bytes.NewBuffer(w.w.Body)
}
return nil
}
func (w *WebhookSideEffect) Exec() error {
r, err := http.NewRequest(w.w.Method, w.w.URL, w.getBody())
if err != nil {
return err
}
if len(w.w.Headers) != 0 {
utils.CopyHeaders(r.Header, w.w.Headers)
}
if len(w.w.Form) != 0 {
r.Header.Set("Content-Type", "application/x-www-form-urlencoded")
}
re, err := http.DefaultClient.Do(r)
if err != nil {
return err
}
if re.Body != nil {
defer re.Body.Close()
}
_, err = ioutil.ReadAll(re.Body)
if err != nil {
return err
}
return nil
}
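
For illustration, a sketch of attaching a webhook side effect to the OnTripped hook; the alert URL and form values are placeholders:

package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/cbreaker"
)

func main() {
	upstream := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) })

	// POST a form to a placeholder alerting endpoint whenever the breaker trips.
	hook, err := cbreaker.NewWebhookSideEffect(cbreaker.Webhook{
		URL:    "http://localhost:9090/alerts", // placeholder URL
		Method: "POST",
		Form:   url.Values{"event": {"tripped"}},
	})
	if err != nil {
		panic(err)
	}

	cb, err := cbreaker.New(upstream, "NetworkErrorRatio() > 0.5", cbreaker.OnTripped(hook))
	if err != nil {
		panic(err)
	}
	http.ListenAndServe(":8080", cb)
}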

56
vendor/github.com/vulcand/oxy/cbreaker/fallback.go generated vendored Normal file

@@ -0,0 +1,56 @@
package cbreaker
import (
"fmt"
"net/http"
"net/url"
"strconv"
)
type Response struct {
StatusCode int
ContentType string
Body []byte
}
type ResponseFallback struct {
r Response
}
func NewResponseFallback(r Response) (*ResponseFallback, error) {
if r.StatusCode == 0 {
return nil, fmt.Errorf("response code should not be 0")
}
return &ResponseFallback{r: r}, nil
}
func (f *ResponseFallback) ServeHTTP(w http.ResponseWriter, req *http.Request) {
if f.r.ContentType != "" {
w.Header().Set("Content-Type", f.r.ContentType)
}
w.Header().Set("Content-Length", strconv.Itoa(len(f.r.Body)))
w.WriteHeader(f.r.StatusCode)
w.Write(f.r.Body)
}
type Redirect struct {
URL string
}
type RedirectFallback struct {
u *url.URL
}
func NewRedirectFallback(r Redirect) (*RedirectFallback, error) {
u, err := url.ParseRequestURI(r.URL)
if err != nil {
return nil, err
}
return &RedirectFallback{u: u}, nil
}
func (f *RedirectFallback) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.Header().Set("Location", f.u.String())
w.WriteHeader(http.StatusFound)
w.Write([]byte(http.StatusText(http.StatusFound)))
}
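
For illustration, a sketch of serving a canned response while the breaker is tripped, using the ResponseFallback above; the status code, content type and body are example values:

package main

import (
	"net/http"

	"github.com/vulcand/oxy/cbreaker"
)

func main() {
	upstream := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) })

	// Static response served while the breaker is tripped.
	fb, err := cbreaker.NewResponseFallback(cbreaker.Response{
		StatusCode:  http.StatusServiceUnavailable,
		ContentType: "application/json",
		Body:        []byte(`{"error": "come back later"}`),
	})
	if err != nil {
		panic(err)
	}

	cb, err := cbreaker.New(upstream, "LatencyAtQuantileMS(50.0) > 100", cbreaker.Fallback(fb))
	if err != nil {
		panic(err)
	}
	http.ListenAndServe(":8080", cb)
}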

232
vendor/github.com/vulcand/oxy/cbreaker/predicates.go generated vendored Normal file

@@ -0,0 +1,232 @@
package cbreaker
import (
"fmt"
"time"
"github.com/vulcand/predicate"
)
type hpredicate func(*CircuitBreaker) bool
// parseExpression parses expression in the go language into predicates.
func parseExpression(in string) (hpredicate, error) {
p, err := predicate.NewParser(predicate.Def{
Operators: predicate.Operators{
AND: and,
OR: or,
EQ: eq,
NEQ: neq,
LT: lt,
LE: le,
GT: gt,
GE: ge,
},
Functions: map[string]interface{}{
"LatencyAtQuantileMS": latencyAtQuantile,
"NetworkErrorRatio": networkErrorRatio,
"ResponseCodeRatio": responseCodeRatio,
},
})
if err != nil {
return nil, err
}
out, err := p.Parse(in)
if err != nil {
return nil, err
}
pr, ok := out.(hpredicate)
if !ok {
return nil, fmt.Errorf("expected predicate, got %T", out)
}
return pr, nil
}
type toInt func(c *CircuitBreaker) int
type toFloat64 func(c *CircuitBreaker) float64
func latencyAtQuantile(quantile float64) toInt {
return func(c *CircuitBreaker) int {
h, err := c.metrics.LatencyHistogram()
if err != nil {
c.log.Errorf("Failed to get latency histogram, for %v error: %v", c, err)
return 0
}
return int(h.LatencyAtQuantile(quantile) / time.Millisecond)
}
}
func networkErrorRatio() toFloat64 {
return func(c *CircuitBreaker) float64 {
return c.metrics.NetworkErrorRatio()
}
}
func responseCodeRatio(startA, endA, startB, endB int) toFloat64 {
return func(c *CircuitBreaker) float64 {
return c.metrics.ResponseCodeRatio(startA, endA, startB, endB)
}
}
// or returns predicate by joining the passed predicates with logical 'or'
func or(fns ...hpredicate) hpredicate {
return func(c *CircuitBreaker) bool {
for _, fn := range fns {
if fn(c) {
return true
}
}
return false
}
}
// and returns predicate by joining the passed predicates with logical 'and'
func and(fns ...hpredicate) hpredicate {
return func(c *CircuitBreaker) bool {
for _, fn := range fns {
if !fn(c) {
return false
}
}
return true
}
}
// not creates negation of the passed predicate
func not(p hpredicate) hpredicate {
return func(c *CircuitBreaker) bool {
return !p(c)
}
}
// eq returns predicate that tests for equality of the value of the mapper and the constant
func eq(m interface{}, value interface{}) (hpredicate, error) {
switch mapper := m.(type) {
case toInt:
return intEQ(mapper, value)
case toFloat64:
return float64EQ(mapper, value)
}
return nil, fmt.Errorf("eq: unsupported argument: %T", m)
}
// neq returns predicate that tests for inequality of the value of the mapper and the constant
func neq(m interface{}, value interface{}) (hpredicate, error) {
p, err := eq(m, value)
if err != nil {
return nil, err
}
return not(p), nil
}
// lt returns predicate that tests that value of the mapper function is less than the constant
func lt(m interface{}, value interface{}) (hpredicate, error) {
switch mapper := m.(type) {
case toInt:
return intLT(mapper, value)
case toFloat64:
return float64LT(mapper, value)
}
return nil, fmt.Errorf("lt: unsupported argument: %T", m)
}
// le returns predicate that tests that value of the mapper function is less than or equal to the constant
func le(m interface{}, value interface{}) (hpredicate, error) {
l, err := lt(m, value)
if err != nil {
return nil, err
}
e, err := eq(m, value)
if err != nil {
return nil, err
}
return func(c *CircuitBreaker) bool {
return l(c) || e(c)
}, nil
}
// gt returns predicate that tests that value of the mapper function is greater than the constant
func gt(m interface{}, value interface{}) (hpredicate, error) {
switch mapper := m.(type) {
case toInt:
return intGT(mapper, value)
case toFloat64:
return float64GT(mapper, value)
}
return nil, fmt.Errorf("gt: unsupported argument: %T", m)
}
// ge returns predicate that tests that value of the mapper function is greater than or equal to the constant
func ge(m interface{}, value interface{}) (hpredicate, error) {
g, err := gt(m, value)
if err != nil {
return nil, err
}
e, err := eq(m, value)
if err != nil {
return nil, err
}
return func(c *CircuitBreaker) bool {
return g(c) || e(c)
}, nil
}
func intEQ(m toInt, val interface{}) (hpredicate, error) {
value, ok := val.(int)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *CircuitBreaker) bool {
return m(c) == value
}, nil
}
func float64EQ(m toFloat64, val interface{}) (hpredicate, error) {
value, ok := val.(float64)
if !ok {
return nil, fmt.Errorf("expected float64, got %T", val)
}
return func(c *CircuitBreaker) bool {
return m(c) == value
}, nil
}
func intLT(m toInt, val interface{}) (hpredicate, error) {
value, ok := val.(int)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *CircuitBreaker) bool {
return m(c) < value
}, nil
}
func intGT(m toInt, val interface{}) (hpredicate, error) {
value, ok := val.(int)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *CircuitBreaker) bool {
return m(c) > value
}, nil
}
func float64LT(m toFloat64, val interface{}) (hpredicate, error) {
value, ok := val.(float64)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *CircuitBreaker) bool {
return m(c) < value
}, nil
}
func float64GT(m toFloat64, val interface{}) (hpredicate, error) {
value, ok := val.(float64)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *CircuitBreaker) bool {
return m(c) > value
}, nil
}
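
For reference, a few example expression strings the parser above should accept (the function names and comparison operators match the table registered in parseExpression; the thresholds are illustrative):

package main

import "fmt"

// Example expressions for parseExpression / cbreaker.New.
var exampleExpressions = []string{
	"NetworkErrorRatio() > 0.5",                 // more than half of the requests hit network errors
	"LatencyAtQuantileMS(50.0) > 100",           // median latency above 100ms
	"ResponseCodeRatio(500, 600, 0, 600) > 0.8", // at least 80% of all responses are 5xx
}

func main() {
	for _, e := range exampleExpressions {
		fmt.Println(e)
	}
}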

66
vendor/github.com/vulcand/oxy/cbreaker/ratio.go generated vendored Normal file

@@ -0,0 +1,66 @@
package cbreaker
import (
"fmt"
"time"
"github.com/mailgun/timetools"
)
// ratioController allows passing portions of traffic back to the endpoints,
// increasing the amount of passed requests using a linear function:
//
// allowedRequestsRatio = 0.5 * (Now() - Start())/Duration
//
type ratioController struct {
duration time.Duration
start time.Time
tm timetools.TimeProvider
allowed int
denied int
}
func newRatioController(tm timetools.TimeProvider, rampUp time.Duration) *ratioController {
return &ratioController{
duration: rampUp,
tm: tm,
start: tm.UtcNow(),
}
}
func (r *ratioController) String() string {
return fmt.Sprintf("RatioController(target=%f, current=%f, allowed=%d, denied=%d)", r.targetRatio(), r.computeRatio(r.allowed, r.denied), r.allowed, r.denied)
}
func (r *ratioController) allowRequest() bool {
t := r.targetRatio()
// This condition answers the question - would we satisfy the target ratio if we allow this request?
e := r.computeRatio(r.allowed+1, r.denied)
if e < t {
r.allowed++
return true
}
r.denied++
return false
}
func (r *ratioController) computeRatio(allowed, denied int) float64 {
if denied+allowed == 0 {
return 0
}
return float64(allowed) / float64(denied+allowed)
}
func (r *ratioController) targetRatio() float64 {
// Here's why it's 0.5:
// We are watching the following ratio
// ratio = a / (a + d)
// We can notice, that once we get to 0.5
// 0.5 = a / (a + d)
// we can evaluate that a = d
// that means equilibrium, where we would allow all the requests
// after this point to achieve ratio of 1 (that can never be reached unless d is 0)
// so we stop from there
multiplier := 0.5 / float64(r.duration)
return multiplier * float64(r.tm.UtcNow().Sub(r.start))
}
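
To make the ramp-up concrete, a small sketch evaluating the same formula as targetRatio for an assumed 10-second recovery window: the target ratio grows from 0 at the start to 0.25 halfway through and 0.5 (the allowed == denied equilibrium) at the end:

package main

import (
	"fmt"
	"time"
)

func main() {
	duration := 10 * time.Second // assumed RecoveryDuration
	multiplier := 0.5 / float64(duration)
	for _, elapsed := range []time.Duration{0, 5 * time.Second, 10 * time.Second} {
		// Mirrors ratioController.targetRatio: prints 0.00, 0.25 and 0.50.
		fmt.Printf("elapsed=%v target=%.2f\n", elapsed, multiplier*float64(elapsed))
	}
}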

139
vendor/github.com/vulcand/oxy/connlimit/connlimit.go generated vendored Normal file

@@ -0,0 +1,139 @@
// package connlimit provides control over simultaneous connections coming from the same source
package connlimit
import (
"fmt"
"net/http"
"sync"
"github.com/vulcand/oxy/utils"
)
// ConnLimiter tracks concurrent connections per token
// and rejects new connections once the per-token limit is exceeded
type ConnLimiter struct {
mutex *sync.Mutex
extract utils.SourceExtractor
connections map[string]int64
maxConnections int64
totalConnections int64
next http.Handler
errHandler utils.ErrorHandler
log utils.Logger
}
func New(next http.Handler, extract utils.SourceExtractor, maxConnections int64, options ...ConnLimitOption) (*ConnLimiter, error) {
if extract == nil {
return nil, fmt.Errorf("Extract function can not be nil")
}
cl := &ConnLimiter{
mutex: &sync.Mutex{},
extract: extract,
maxConnections: maxConnections,
connections: make(map[string]int64),
next: next,
}
for _, o := range options {
if err := o(cl); err != nil {
return nil, err
}
}
if cl.log == nil {
cl.log = utils.NullLogger
}
if cl.errHandler == nil {
cl.errHandler = defaultErrHandler
}
return cl, nil
}
func (cl *ConnLimiter) Wrap(h http.Handler) {
cl.next = h
}
func (cl *ConnLimiter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
token, amount, err := cl.extract.Extract(r)
if err != nil {
cl.log.Errorf("failed to extract source of the connection: %v", err)
cl.errHandler.ServeHTTP(w, r, err)
return
}
if err := cl.acquire(token, amount); err != nil {
cl.log.Infof("limiting request source %s: %v", token, err)
cl.errHandler.ServeHTTP(w, r, err)
return
}
defer cl.release(token, amount)
cl.next.ServeHTTP(w, r)
}
func (cl *ConnLimiter) acquire(token string, amount int64) error {
cl.mutex.Lock()
defer cl.mutex.Unlock()
connections := cl.connections[token]
if connections >= cl.maxConnections {
return &MaxConnError{max: cl.maxConnections}
}
cl.connections[token] += amount
cl.totalConnections += int64(amount)
return nil
}
func (cl *ConnLimiter) release(token string, amount int64) {
cl.mutex.Lock()
defer cl.mutex.Unlock()
cl.connections[token] -= amount
cl.totalConnections -= int64(amount)
// Otherwise it would grow forever
if cl.connections[token] == 0 {
delete(cl.connections, token)
}
}
type MaxConnError struct {
max int64
}
func (m *MaxConnError) Error() string {
return fmt.Sprintf("max connections reached: %d", m.max)
}
type ConnErrHandler struct {
}
func (e *ConnErrHandler) ServeHTTP(w http.ResponseWriter, req *http.Request, err error) {
if _, ok := err.(*MaxConnError); ok {
w.WriteHeader(429)
w.Write([]byte(err.Error()))
return
}
utils.DefaultHandler.ServeHTTP(w, req, err)
}
type ConnLimitOption func(l *ConnLimiter) error
// Logger sets the logger that will be used by this middleware.
func Logger(l utils.Logger) ConnLimitOption {
return func(cl *ConnLimiter) error {
cl.log = l
return nil
}
}
// ErrorHandler sets error handler of the server
func ErrorHandler(h utils.ErrorHandler) ConnLimitOption {
return func(cl *ConnLimiter) error {
cl.errHandler = h
return nil
}
}
var defaultErrHandler = &ConnErrHandler{}
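
For illustration, a sketch limiting each client IP to an assumed 10 concurrent connections; it assumes the NewExtractor helper from oxy's utils package and an arbitrary listen address:

package main

import (
	"net/http"

	"github.com/vulcand/oxy/connlimit"
	"github.com/vulcand/oxy/utils"
)

func main() {
	upstream := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) })

	// Extract the limiting token from the client IP (assumes utils.NewExtractor from oxy).
	extractor, err := utils.NewExtractor("client.ip")
	if err != nil {
		panic(err)
	}

	cl, err := connlimit.New(upstream, extractor, 10) // at most 10 concurrent connections per IP
	if err != nil {
		panic(err)
	}
	http.ListenAndServe(":8080", cl)
}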

342
vendor/github.com/vulcand/oxy/forward/fwd.go generated vendored Normal file

@@ -0,0 +1,342 @@
// package forward implements an http handler that forwards requests to a remote server
// and serves back the response.
// Websocket proxying support is based on https://github.com/yhat/wsutil
package forward
import (
"crypto/tls"
"io"
"net"
"net/http"
"net/url"
"os"
"reflect"
"strconv"
"strings"
"time"
"github.com/vulcand/oxy/utils"
)
// ReqRewriter can alter request headers and body
type ReqRewriter interface {
Rewrite(r *http.Request)
}
type optSetter func(f *Forwarder) error
// PassHostHeader specifies whether the client's Host header field should
// be forwarded to the backend as-is
func PassHostHeader(b bool) optSetter {
return func(f *Forwarder) error {
f.passHost = b
return nil
}
}
// StreamResponse forces streaming body (flushes response directly to client)
func StreamResponse(b bool) optSetter {
return func(f *Forwarder) error {
f.httpForwarder.streamResponse = b
return nil
}
}
// RoundTripper sets a new http.RoundTripper
// Forwarder will use http.DefaultTransport as a default round tripper
func RoundTripper(r http.RoundTripper) optSetter {
return func(f *Forwarder) error {
f.roundTripper = r
return nil
}
}
// Rewriter defines a request rewriter for the HTTP forwarder
func Rewriter(r ReqRewriter) optSetter {
return func(f *Forwarder) error {
f.httpForwarder.rewriter = r
return nil
}
}
// WebsocketRewriter defines a request rewriter for the websocket forwarder
func WebsocketRewriter(r ReqRewriter) optSetter {
return func(f *Forwarder) error {
f.websocketForwarder.rewriter = r
return nil
}
}
// ErrorHandler is a functional argument that sets error handler of the server
func ErrorHandler(h utils.ErrorHandler) optSetter {
return func(f *Forwarder) error {
f.errHandler = h
return nil
}
}
// Logger specifies the logger to use.
// Forwarder will default to oxyutils.NullLogger if no logger has been specified
func Logger(l utils.Logger) optSetter {
return func(f *Forwarder) error {
f.log = l
return nil
}
}
// Forwarder wraps two traffic forwarding implementations: HTTP and websockets.
// It decides based on the specified request which implementation to use
type Forwarder struct {
*httpForwarder
*websocketForwarder
*handlerContext
}
// handlerContext defines a handler context for error reporting and logging
type handlerContext struct {
errHandler utils.ErrorHandler
log utils.Logger
}
// httpForwarder is a handler that can reverse proxy
// HTTP traffic
type httpForwarder struct {
roundTripper http.RoundTripper
rewriter ReqRewriter
passHost bool
streamResponse bool
}
// websocketForwarder is a handler that can reverse proxy
// websocket traffic
type websocketForwarder struct {
rewriter ReqRewriter
TLSClientConfig *tls.Config
}
// New creates an instance of Forwarder based on the provided list of configuration options
func New(setters ...optSetter) (*Forwarder, error) {
f := &Forwarder{
httpForwarder: &httpForwarder{},
websocketForwarder: &websocketForwarder{},
handlerContext: &handlerContext{},
}
for _, s := range setters {
if err := s(f); err != nil {
return nil, err
}
}
if f.httpForwarder.roundTripper == nil {
f.httpForwarder.roundTripper = http.DefaultTransport
}
if f.httpForwarder.rewriter == nil {
h, err := os.Hostname()
if err != nil {
h = "localhost"
}
f.httpForwarder.rewriter = &HeaderRewriter{TrustForwardHeader: true, Hostname: h}
}
if f.log == nil {
f.log = utils.NullLogger
}
if f.errHandler == nil {
f.errHandler = utils.DefaultHandler
}
return f, nil
}
// ServeHTTP decides which forwarder to use based on the specified
// request and delegates to the proper implementation
func (f *Forwarder) ServeHTTP(w http.ResponseWriter, req *http.Request) {
if isWebsocketRequest(req) {
f.websocketForwarder.serveHTTP(w, req, f.handlerContext)
} else {
f.httpForwarder.serveHTTP(w, req, f.handlerContext)
}
}
// serveHTTP forwards HTTP traffic using the configured transport
func (f *httpForwarder) serveHTTP(w http.ResponseWriter, req *http.Request, ctx *handlerContext) {
start := time.Now().UTC()
response, err := f.roundTripper.RoundTrip(f.copyRequest(req, req.URL))
if err != nil {
ctx.log.Errorf("Error forwarding to %v, err: %v", req.URL, err)
ctx.errHandler.ServeHTTP(w, req, err)
return
}
utils.CopyHeaders(w.Header(), response.Header)
// Remove hop-by-hop headers.
utils.RemoveHeaders(w.Header(), HopHeaders...)
w.WriteHeader(response.StatusCode)
stream := f.streamResponse
if !stream {
contentType, err := utils.GetHeaderMediaType(response.Header, ContentType)
if err == nil {
stream = contentType == "text/event-stream"
}
}
written, err := io.Copy(newResponseFlusher(w, stream), response.Body)
if req.TLS != nil {
ctx.log.Infof("Round trip: %v, code: %v, duration: %v tls:version: %x, tls:resume:%t, tls:csuite:%x, tls:server:%v",
req.URL, response.StatusCode, time.Now().UTC().Sub(start),
req.TLS.Version,
req.TLS.DidResume,
req.TLS.CipherSuite,
req.TLS.ServerName)
} else {
ctx.log.Infof("Round trip: %v, code: %v, duration: %v",
req.URL, response.StatusCode, time.Now().UTC().Sub(start))
}
defer response.Body.Close()
if err != nil {
ctx.log.Errorf("Error copying upstream response Body: %v", err)
ctx.errHandler.ServeHTTP(w, req, err)
return
}
if written != 0 {
w.Header().Set(ContentLength, strconv.FormatInt(written, 10))
}
}
// copyRequest makes a copy of the specified request to be sent using the configured
// transport
func (f *httpForwarder) copyRequest(req *http.Request, u *url.URL) *http.Request {
outReq := new(http.Request)
*outReq = *req // includes shallow copies of maps, but we handle this below
outReq.URL = utils.CopyURL(req.URL)
outReq.URL.Scheme = u.Scheme
outReq.URL.Host = u.Host
outReq.URL.Opaque = req.RequestURI
// raw query is already included in RequestURI, so ignore it to avoid dupes
outReq.URL.RawQuery = ""
// Do not pass client Host header unless optsetter PassHostHeader is set.
if !f.passHost {
outReq.Host = u.Host
}
outReq.Proto = "HTTP/1.1"
outReq.ProtoMajor = 1
outReq.ProtoMinor = 1
// Overwrite close flag so we can keep persistent connection for the backend servers
outReq.Close = false
outReq.Header = make(http.Header)
utils.CopyHeaders(outReq.Header, req.Header)
if f.rewriter != nil {
f.rewriter.Rewrite(outReq)
}
return outReq
}
// serveHTTP forwards websocket traffic
func (f *websocketForwarder) serveHTTP(w http.ResponseWriter, req *http.Request, ctx *handlerContext) {
outReq := f.copyRequest(req, req.URL)
host := outReq.URL.Host
dial := net.Dial
// if the host does not specify a port, use the default port for the scheme (443 for wss, 80 otherwise)
if !strings.Contains(host, ":") {
if outReq.URL.Scheme == "wss" {
host = host + ":443"
} else {
host = host + ":80"
}
}
if outReq.URL.Scheme == "wss" {
if f.TLSClientConfig == nil {
f.TLSClientConfig = &tls.Config{}
}
dial = func(network, address string) (net.Conn, error) {
return tls.Dial("tcp", host, f.TLSClientConfig)
}
}
targetConn, err := dial("tcp", host)
if err != nil {
ctx.log.Errorf("Error dialing `%v`: %v", host, err)
ctx.errHandler.ServeHTTP(w, req, err)
return
}
hijacker, ok := w.(http.Hijacker)
if !ok {
ctx.log.Errorf("Unable to hijack the connection: %v", reflect.TypeOf(w))
ctx.errHandler.ServeHTTP(w, req, nil)
return
}
underlyingConn, _, err := hijacker.Hijack()
if err != nil {
ctx.log.Errorf("Unable to hijack the connection: %v %v", reflect.TypeOf(w), err)
ctx.errHandler.ServeHTTP(w, req, err)
return
}
// it is now caller's responsibility to Close the underlying connection
defer underlyingConn.Close()
defer targetConn.Close()
// write the modified incoming request to the dialed connection
if err = outReq.Write(targetConn); err != nil {
ctx.log.Errorf("Unable to copy request to target: %v", err)
ctx.errHandler.ServeHTTP(w, req, err)
return
}
errc := make(chan error, 2)
replicate := func(dst io.Writer, src io.Reader) {
_, err := io.Copy(dst, src)
errc <- err
}
go replicate(targetConn, underlyingConn)
go replicate(underlyingConn, targetConn)
<-errc
}
// copyRequest makes a copy of the specified request.
func (f *websocketForwarder) copyRequest(req *http.Request, u *url.URL) (outReq *http.Request) {
outReq = new(http.Request)
*outReq = *req // includes shallow copies of maps, but we handle this below
outReq.URL = utils.CopyURL(req.URL)
outReq.URL.Scheme = u.Scheme
outReq.URL.Host = u.Host
outReq.URL.Opaque = req.RequestURI
// raw query is already included in RequestURI, so ignore it to avoid dupes
outReq.URL.RawQuery = ""
outReq.Proto = "HTTP/1.1"
outReq.ProtoMajor = 1
outReq.ProtoMinor = 1
// Overwrite close flag so we can keep persistent connection for the backend servers
outReq.Close = false
outReq.Header = make(http.Header)
utils.CopyHeaders(outReq.Header, req.Header)
if f.rewriter != nil {
f.rewriter.Rewrite(outReq)
}
return outReq
}
// isWebsocketRequest determines if the specified HTTP request is a
// websocket handshake request
func isWebsocketRequest(req *http.Request) bool {
containsHeader := func(name, value string) bool {
items := strings.Split(req.Header.Get(name), ",")
for _, item := range items {
if value == strings.ToLower(strings.TrimSpace(item)) {
return true
}
}
return false
}
return containsHeader(Connection, "upgrade") && containsHeader(Upgrade, "websocket")
}
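
For illustration, a minimal sketch of the usual forwarding setup: the Forwarder sends the request wherever req.URL points, so a small wrapping handler picks the backend (the addresses below are placeholders):

package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/forward"
)

func main() {
	fwd, err := forward.New()
	if err != nil {
		panic(err)
	}

	// Point every request at a placeholder backend before handing it to the forwarder.
	backend, _ := url.Parse("http://localhost:8080")
	redirect := http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		req.URL = backend
		fwd.ServeHTTP(w, req)
	})
	http.ListenAndServe(":8081", redirect)
}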

32
vendor/github.com/vulcand/oxy/forward/headers.go generated vendored Normal file

@@ -0,0 +1,32 @@
package forward
const (
XForwardedProto = "X-Forwarded-Proto"
XForwardedFor = "X-Forwarded-For"
XForwardedHost = "X-Forwarded-Host"
XForwardedServer = "X-Forwarded-Server"
Connection = "Connection"
KeepAlive = "Keep-Alive"
ProxyAuthenticate = "Proxy-Authenticate"
ProxyAuthorization = "Proxy-Authorization"
Te = "Te" // canonicalized version of "TE"
Trailers = "Trailers"
TransferEncoding = "Transfer-Encoding"
Upgrade = "Upgrade"
ContentLength = "Content-Length"
ContentType = "Content-Type"
)
// Hop-by-hop headers. These are removed when sent to the backend.
// http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
// Copied from reverseproxy.go, too bad
var HopHeaders = []string{
Connection,
KeepAlive,
ProxyAuthenticate,
ProxyAuthorization,
Te, // canonicalized version of "TE"
Trailers,
TransferEncoding,
Upgrade,
}


@@ -0,0 +1,53 @@
package forward
import (
"bufio"
"fmt"
"net"
"net/http"
)
var (
_ http.Hijacker = &responseFlusher{}
_ http.Flusher = &responseFlusher{}
_ http.CloseNotifier = &responseFlusher{}
)
type responseFlusher struct {
http.ResponseWriter
flush bool
}
func newResponseFlusher(rw http.ResponseWriter, flush bool) *responseFlusher {
return &responseFlusher{
ResponseWriter: rw,
flush: flush,
}
}
func (wf *responseFlusher) Write(p []byte) (int, error) {
written, err := wf.ResponseWriter.Write(p)
if wf.flush {
wf.Flush()
}
return written, err
}
func (wf *responseFlusher) Hijack() (net.Conn, *bufio.ReadWriter, error) {
hijacker, ok := wf.ResponseWriter.(http.Hijacker)
if !ok {
return nil, nil, fmt.Errorf("the ResponseWriter doesn't support the Hijacker interface")
}
return hijacker.Hijack()
}
func (wf *responseFlusher) CloseNotify() <-chan bool {
return wf.ResponseWriter.(http.CloseNotifier).CloseNotify()
}
func (wf *responseFlusher) Flush() {
flusher, ok := wf.ResponseWriter.(http.Flusher)
if ok {
flusher.Flush()
}
}

48
vendor/github.com/vulcand/oxy/forward/rewrite.go generated vendored Normal file

@@ -0,0 +1,48 @@
package forward
import (
"net"
"net/http"
"strings"
"github.com/vulcand/oxy/utils"
)
// HeaderRewriter is responsible for removing hop-by-hop headers and setting forwarding headers
type HeaderRewriter struct {
TrustForwardHeader bool
Hostname string
}
func (rw *HeaderRewriter) Rewrite(req *http.Request) {
if clientIP, _, err := net.SplitHostPort(req.RemoteAddr); err == nil {
if rw.TrustForwardHeader {
if prior, ok := req.Header[XForwardedFor]; ok {
clientIP = strings.Join(prior, ", ") + ", " + clientIP
}
}
req.Header.Set(XForwardedFor, clientIP)
}
if xfp := req.Header.Get(XForwardedProto); xfp != "" && rw.TrustForwardHeader {
req.Header.Set(XForwardedProto, xfp)
} else if req.TLS != nil {
req.Header.Set(XForwardedProto, "https")
} else {
req.Header.Set(XForwardedProto, "http")
}
if xfh := req.Header.Get(XForwardedHost); xfh != "" && rw.TrustForwardHeader {
req.Header.Set(XForwardedHost, xfh)
} else if req.Host != "" {
req.Header.Set(XForwardedHost, req.Host)
}
if rw.Hostname != "" {
req.Header.Set(XForwardedServer, rw.Hostname)
}
// Remove hop-by-hop headers to the backend. Especially important is "Connection" because we want a persistent
// connection, regardless of what the client sent to us.
utils.RemoveHeaders(req.Header, HopHeaders...)
}

99
vendor/github.com/vulcand/oxy/memmetrics/anomaly.go generated vendored Normal file

@@ -0,0 +1,99 @@
package memmetrics
import (
"math"
"sort"
"time"
)
// SplitLatencies provides simple anomaly detection for request latencies.
// It splits values into a good or bad category based on the threshold and the median value.
// If all values are not far from the median, it will return all values in the 'good' set.
// Precision is the smallest value to consider, e.g. if set to millisecond, microseconds will be ignored.
func SplitLatencies(values []time.Duration, precision time.Duration) (good map[time.Duration]bool, bad map[time.Duration]bool) {
// Find the max latency M and then map each latency L to the ratio L/M and then call SplitFloat64
v2r := map[float64]time.Duration{}
ratios := make([]float64, len(values))
m := maxTime(values)
for i, v := range values {
ratio := float64(v/precision+1) / float64(m/precision+1) // +1 is to avoid division by 0
v2r[ratio] = v
ratios[i] = ratio
}
good, bad = make(map[time.Duration]bool), make(map[time.Duration]bool)
// Note that multiplier makes this function way less sensitive than ratios detector, this is to avoid noise.
vgood, vbad := SplitFloat64(2, 0, ratios)
for r, _ := range vgood {
good[v2r[r]] = true
}
for r, _ := range vbad {
bad[v2r[r]] = true
}
return good, bad
}
// SplitRatios provides simple anomaly detection for ratio values that are all in the range [0, 1].
// It splits values into a good or bad category based on the threshold and the median value.
// If all values are not far from the median, it will return all values in the 'good' set.
func SplitRatios(values []float64) (good map[float64]bool, bad map[float64]bool) {
return SplitFloat64(1.5, 0, values)
}
// SplitFloat64 provides simple anomaly detection for skewed data sets with no particular distribution.
// In essence it applies the formula if(v > (median(values) + medianAbsoluteDeviation(values)) * threshold) -> anomaly.
// There's a corner case where there are just 2 values, so by definition there's no value that exceeds the threshold.
// This case is solved by introducing an additional value that we know is good, e.g. 0. That helps to improve the detection results
// on such data sets.
func SplitFloat64(threshold, sentinel float64, values []float64) (good map[float64]bool, bad map[float64]bool) {
good, bad = make(map[float64]bool), make(map[float64]bool)
var newValues []float64
if len(values)%2 == 0 {
newValues = make([]float64, len(values)+1)
copy(newValues, values)
// Add a sentinel endpoint so we can distinguish outliers better
newValues[len(newValues)-1] = sentinel
} else {
newValues = values
}
m := median(newValues)
mAbs := medianAbsoluteDeviation(newValues)
for _, v := range values {
if v > (m+mAbs)*threshold {
bad[v] = true
} else {
good[v] = true
}
}
return good, bad
}
func median(values []float64) float64 {
vals := make([]float64, len(values))
copy(vals, values)
sort.Float64s(vals)
l := len(vals)
if l%2 != 0 {
return vals[l/2]
}
return (vals[l/2-1] + vals[l/2]) / 2.0
}
func medianAbsoluteDeviation(values []float64) float64 {
m := median(values)
distances := make([]float64, len(values))
for i, v := range values {
distances[i] = math.Abs(v - m)
}
return median(distances)
}
func maxTime(vals []time.Duration) time.Duration {
val := vals[0]
for _, v := range vals {
if v > val {
val = v
}
}
return val
}
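
A worked example of the MAD-based split: for the values below the padded median is 0.1 and the median absolute deviation is 0.02, so with threshold 1.5 the cutoff is (0.1 + 0.02) * 1.5 = 0.18 and only 0.95 lands in the 'bad' set:

package main

import (
	"fmt"

	"github.com/vulcand/oxy/memmetrics"
)

func main() {
	// 0.95 exceeds the 0.18 cutoff and is classified as bad; the rest are good.
	good, bad := memmetrics.SplitFloat64(1.5, 0, []float64{0.1, 0.1, 0.12, 0.95})
	fmt.Println(good, bad)
}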

155
vendor/github.com/vulcand/oxy/memmetrics/counter.go generated vendored Normal file

@@ -0,0 +1,155 @@
package memmetrics
import (
"fmt"
"time"
"github.com/mailgun/timetools"
)
type rcOptSetter func(*RollingCounter) error
func CounterClock(c timetools.TimeProvider) rcOptSetter {
return func(r *RollingCounter) error {
r.clock = c
return nil
}
}
// RollingCounter calculates the in-memory failure rate of an endpoint using a rolling window of a predefined size
type RollingCounter struct {
clock timetools.TimeProvider
resolution time.Duration
values []int
countedBuckets int // how many samples in different buckets have we collected so far
lastBucket int // last recorded bucket
lastUpdated time.Time
}
// NewCounter creates a counter with a fixed number of buckets that are rotated every resolution period.
// E.g. 10 buckets with a 1 second resolution means that a new bucket is started every second, so the counter maintains a 10 second rolling window.
// A typical configuration is 10 buckets with a 1 second resolution.
func NewCounter(buckets int, resolution time.Duration, options ...rcOptSetter) (*RollingCounter, error) {
if buckets <= 0 {
return nil, fmt.Errorf("Buckets should be >= 0")
}
if resolution < time.Second {
return nil, fmt.Errorf("Resolution should be larger than a second")
}
rc := &RollingCounter{
lastBucket: -1,
resolution: resolution,
values: make([]int, buckets),
}
for _, o := range options {
if err := o(rc); err != nil {
return nil, err
}
}
if rc.clock == nil {
rc.clock = &timetools.RealTime{}
}
return rc, nil
}
func (c *RollingCounter) Append(o *RollingCounter) error {
c.Inc(int(o.Count()))
return nil
}
func (c *RollingCounter) Clone() *RollingCounter {
c.cleanup()
other := &RollingCounter{
resolution: c.resolution,
values: make([]int, len(c.values)),
clock: c.clock,
lastBucket: c.lastBucket,
lastUpdated: c.lastUpdated,
}
for i, v := range c.values {
other.values[i] = v
}
return other
}
func (c *RollingCounter) Reset() {
c.lastBucket = -1
c.countedBuckets = 0
c.lastUpdated = time.Time{}
for i := range c.values {
c.values[i] = 0
}
}
func (c *RollingCounter) CountedBuckets() int {
return c.countedBuckets
}
func (c *RollingCounter) Count() int64 {
c.cleanup()
return c.sum()
}
func (c *RollingCounter) Resolution() time.Duration {
return c.resolution
}
func (c *RollingCounter) Buckets() int {
return len(c.values)
}
func (c *RollingCounter) WindowSize() time.Duration {
return time.Duration(len(c.values)) * c.resolution
}
func (c *RollingCounter) Inc(v int) {
c.cleanup()
c.incBucketValue(v)
}
func (c *RollingCounter) incBucketValue(v int) {
now := c.clock.UtcNow()
bucket := c.getBucket(now)
c.values[bucket] += v
c.lastUpdated = now
// Update usage stats if we haven't collected enough data
if c.countedBuckets < len(c.values) {
// Only update if we have advanced to the next bucket and not incremented the value
// in the current bucket.
if c.lastBucket != bucket {
c.lastBucket = bucket
c.countedBuckets++
}
}
}
// getBucket returns the index of the moving window bucket that the given time falls into
func (c *RollingCounter) getBucket(t time.Time) int {
return int(t.Truncate(c.resolution).Unix() % int64(len(c.values)))
}
// Reset buckets that were not updated
func (c *RollingCounter) cleanup() {
now := c.clock.UtcNow()
for i := 0; i < len(c.values); i++ {
now = now.Add(time.Duration(-1*i) * c.resolution)
if now.Truncate(c.resolution).After(c.lastUpdated.Truncate(c.resolution)) {
c.values[c.getBucket(now)] = 0
} else {
break
}
}
}
func (c *RollingCounter) sum() int64 {
out := int64(0)
for _, v := range c.values {
out += int64(v)
}
return out
}
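
For illustration, a minimal sketch of the rolling counter with 10 one-second buckets, i.e. a 10-second window:

package main

import (
	"fmt"
	"time"

	"github.com/vulcand/oxy/memmetrics"
)

func main() {
	// 10 buckets of 1 second each: a 10-second rolling window.
	counter, err := memmetrics.NewCounter(10, time.Second)
	if err != nil {
		panic(err)
	}
	counter.Inc(1)
	counter.Inc(2)
	fmt.Println(counter.Count(), counter.WindowSize()) // 3 10s
}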

174
vendor/github.com/vulcand/oxy/memmetrics/histogram.go generated vendored Normal file

@@ -0,0 +1,174 @@
package memmetrics
import (
"fmt"
"time"
"github.com/codahale/hdrhistogram"
"github.com/mailgun/timetools"
)
// HDRHistogram is a tiny wrapper around github.com/codahale/hdrhistogram that provides convenience functions for measuring http latencies
type HDRHistogram struct {
// lowest trackable value
low int64
// highest trackable value
high int64
// significant figures
sigfigs int
h *hdrhistogram.Histogram
}
func NewHDRHistogram(low, high int64, sigfigs int) (h *HDRHistogram, err error) {
defer func() {
if msg := recover(); msg != nil {
err = fmt.Errorf("%s", msg)
}
}()
return &HDRHistogram{
low: low,
high: high,
sigfigs: sigfigs,
h: hdrhistogram.New(low, high, sigfigs),
}, nil
}
// Returns latency at quantile with microsecond precision
func (h *HDRHistogram) LatencyAtQuantile(q float64) time.Duration {
return time.Duration(h.ValueAtQuantile(q)) * time.Microsecond
}
// Records latencies with microsecond precision
func (h *HDRHistogram) RecordLatencies(d time.Duration, n int64) error {
return h.RecordValues(int64(d/time.Microsecond), n)
}
func (h *HDRHistogram) Reset() {
h.h.Reset()
}
func (h *HDRHistogram) ValueAtQuantile(q float64) int64 {
return h.h.ValueAtQuantile(q)
}
func (h *HDRHistogram) RecordValues(v, n int64) error {
return h.h.RecordValues(v, n)
}
func (h *HDRHistogram) Merge(other *HDRHistogram) error {
if other == nil {
return fmt.Errorf("other is nil")
}
h.h.Merge(other.h)
return nil
}
type rhOptSetter func(r *RollingHDRHistogram) error
func RollingClock(clock timetools.TimeProvider) rhOptSetter {
return func(r *RollingHDRHistogram) error {
r.clock = clock
return nil
}
}
// RollingHDRHistogram holds multiple histograms and rotates the active one every period.
// It provides the resulting histogram as a result of a call to the 'Merged' function.
type RollingHDRHistogram struct {
idx int
lastRoll time.Time
period time.Duration
bucketCount int
low int64
high int64
sigfigs int
buckets []*HDRHistogram
clock timetools.TimeProvider
}
func NewRollingHDRHistogram(low, high int64, sigfigs int, period time.Duration, bucketCount int, options ...rhOptSetter) (*RollingHDRHistogram, error) {
rh := &RollingHDRHistogram{
bucketCount: bucketCount,
period: period,
low: low,
high: high,
sigfigs: sigfigs,
}
for _, o := range options {
if err := o(rh); err != nil {
return nil, err
}
}
if rh.clock == nil {
rh.clock = &timetools.RealTime{}
}
buckets := make([]*HDRHistogram, rh.bucketCount)
for i := range buckets {
h, err := NewHDRHistogram(low, high, sigfigs)
if err != nil {
return nil, err
}
buckets[i] = h
}
rh.buckets = buckets
return rh, nil
}
func (r *RollingHDRHistogram) Append(o *RollingHDRHistogram) error {
if r.bucketCount != o.bucketCount || r.period != o.period || r.low != o.low || r.high != o.high || r.sigfigs != o.sigfigs {
return fmt.Errorf("can't merge")
}
for i := range r.buckets {
if err := r.buckets[i].Merge(o.buckets[i]); err != nil {
return err
}
}
return nil
}
func (r *RollingHDRHistogram) Reset() {
r.idx = 0
r.lastRoll = r.clock.UtcNow()
for _, b := range r.buckets {
b.Reset()
}
}
func (r *RollingHDRHistogram) rotate() {
r.idx = (r.idx + 1) % len(r.buckets)
r.buckets[r.idx].Reset()
}
func (r *RollingHDRHistogram) Merged() (*HDRHistogram, error) {
m, err := NewHDRHistogram(r.low, r.high, r.sigfigs)
if err != nil {
return m, err
}
for _, h := range r.buckets {
if err := m.Merge(h); err != nil {
return nil, err
}
}
return m, nil
}
func (r *RollingHDRHistogram) getHist() *HDRHistogram {
if r.clock.UtcNow().Sub(r.lastRoll) >= r.period {
r.rotate()
r.lastRoll = r.clock.UtcNow()
}
return r.buckets[r.idx]
}
func (r *RollingHDRHistogram) RecordLatencies(v time.Duration, n int64) error {
return r.getHist().RecordLatencies(v, n)
}
func (r *RollingHDRHistogram) RecordValues(v, n int64) error {
return r.getHist().RecordValues(v, n)
}
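
For illustration, a sketch of recording a latency into a rolling histogram and reading a percentile from the merged result; the range, precision and rotation parameters are example values:

package main

import (
	"fmt"
	"time"

	"github.com/vulcand/oxy/memmetrics"
)

func main() {
	// Track 1us..1min latencies with 3 significant figures,
	// rotating across 6 buckets of 10 seconds each.
	h, err := memmetrics.NewRollingHDRHistogram(1, int64(time.Minute/time.Microsecond), 3, 10*time.Second, 6)
	if err != nil {
		panic(err)
	}
	h.RecordLatencies(25*time.Millisecond, 1)
	merged, err := h.Merged()
	if err != nil {
		panic(err)
	}
	fmt.Println(merged.LatencyAtQuantile(99))
}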

120
vendor/github.com/vulcand/oxy/memmetrics/ratio.go generated vendored Normal file

@@ -0,0 +1,120 @@
package memmetrics
import (
"time"
"github.com/mailgun/timetools"
)
type ratioOptSetter func(r *RatioCounter) error
func RatioClock(clock timetools.TimeProvider) ratioOptSetter {
return func(r *RatioCounter) error {
r.clock = clock
return nil
}
}
// RatioCounter calculates a ratio of a/(a+b) over a rolling window of predefined buckets
type RatioCounter struct {
clock timetools.TimeProvider
a *RollingCounter
b *RollingCounter
}
func NewRatioCounter(buckets int, resolution time.Duration, options ...ratioOptSetter) (*RatioCounter, error) {
rc := &RatioCounter{}
for _, o := range options {
if err := o(rc); err != nil {
return nil, err
}
}
if rc.clock == nil {
rc.clock = &timetools.RealTime{}
}
a, err := NewCounter(buckets, resolution, CounterClock(rc.clock))
if err != nil {
return nil, err
}
b, err := NewCounter(buckets, resolution, CounterClock(rc.clock))
if err != nil {
return nil, err
}
rc.a = a
rc.b = b
return rc, nil
}
func (r *RatioCounter) Reset() {
r.a.Reset()
r.b.Reset()
}
func (r *RatioCounter) IsReady() bool {
return r.a.countedBuckets+r.b.countedBuckets >= len(r.a.values)
}
func (r *RatioCounter) CountA() int64 {
return r.a.Count()
}
func (r *RatioCounter) CountB() int64 {
return r.b.Count()
}
func (r *RatioCounter) Resolution() time.Duration {
return r.a.Resolution()
}
func (r *RatioCounter) Buckets() int {
return r.a.Buckets()
}
func (r *RatioCounter) WindowSize() time.Duration {
return r.a.WindowSize()
}
func (r *RatioCounter) ProcessedCount() int64 {
return r.CountA() + r.CountB()
}
func (r *RatioCounter) Ratio() float64 {
a := r.a.Count()
b := r.b.Count()
// No data yet, the ratio defaults to 0
if a+b == 0 {
return 0
}
return float64(a) / float64(a+b)
}
func (r *RatioCounter) IncA(v int) {
r.a.Inc(v)
}
func (r *RatioCounter) IncB(v int) {
r.b.Inc(v)
}
type TestMeter struct {
Rate float64
NotReady bool
WindowSize time.Duration
}
func (tm *TestMeter) GetWindowSize() time.Duration {
return tm.WindowSize
}
func (tm *TestMeter) IsReady() bool {
return !tm.NotReady
}
func (tm *TestMeter) GetRate() float64 {
return tm.Rate
}
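
An illustrative sketch of RatioCounter usage (not part of the vendored source; the counts are made up): record failures in bucket A and successes in bucket B over a 10-second rolling window, then read back the failure ratio.

package main

import (
	"fmt"
	"time"

	"github.com/vulcand/oxy/memmetrics"
)

func main() {
	// 10 buckets of 1 second each: a 10-second rolling window.
	rc, err := memmetrics.NewRatioCounter(10, time.Second)
	if err != nil {
		panic(err)
	}

	// Treat A as "failures" and B as "successes".
	rc.IncA(2)
	rc.IncB(8)

	// Ratio returns a/(a+b); here 2/(2+8) = 0.2.
	fmt.Printf("failure ratio: %.2f (ready: %v)\n", rc.Ratio(), rc.IsReady())
}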

259
vendor/github.com/vulcand/oxy/memmetrics/roundtrip.go generated vendored Normal file
View file

@ -0,0 +1,259 @@
package memmetrics
import (
"errors"
"net/http"
"sync"
"time"
"github.com/mailgun/timetools"
)
// RTMetrics provides aggregated performance metrics for HTTP request processing,
// such as round trip latency, response code counters, network errors and total requests.
// All counters are collected as rolling window counters with a defined precision, and the
// histograms are rolling window histograms with a defined precision as well.
// See RTOptions for more detail on parameters.
type RTMetrics struct {
total *RollingCounter
netErrors *RollingCounter
statusCodes map[int]*RollingCounter
statusCodesLock sync.RWMutex
histogram *RollingHDRHistogram
newCounter NewCounterFn
newHist NewRollingHistogramFn
clock timetools.TimeProvider
}
type rrOptSetter func(r *RTMetrics) error
type NewRTMetricsFn func() (*RTMetrics, error)
type NewCounterFn func() (*RollingCounter, error)
type NewRollingHistogramFn func() (*RollingHDRHistogram, error)
func RTCounter(new NewCounterFn) rrOptSetter {
return func(r *RTMetrics) error {
r.newCounter = new
return nil
}
}
func RTHistogram(new NewRollingHistogramFn) rrOptSetter {
return func(r *RTMetrics) error {
r.newHist = new
return nil
}
}
func RTClock(clock timetools.TimeProvider) rrOptSetter {
return func(r *RTMetrics) error {
r.clock = clock
return nil
}
}
// NewRTMetrics returns new instance of metrics collector.
func NewRTMetrics(settings ...rrOptSetter) (*RTMetrics, error) {
m := &RTMetrics{
statusCodes: make(map[int]*RollingCounter),
statusCodesLock: sync.RWMutex{},
}
for _, s := range settings {
if err := s(m); err != nil {
return nil, err
}
}
if m.clock == nil {
m.clock = &timetools.RealTime{}
}
if m.newCounter == nil {
m.newCounter = func() (*RollingCounter, error) {
return NewCounter(counterBuckets, counterResolution, CounterClock(m.clock))
}
}
if m.newHist == nil {
m.newHist = func() (*RollingHDRHistogram, error) {
return NewRollingHDRHistogram(histMin, histMax, histSignificantFigures, histPeriod, histBuckets, RollingClock(m.clock))
}
}
h, err := m.newHist()
if err != nil {
return nil, err
}
netErrors, err := m.newCounter()
if err != nil {
return nil, err
}
total, err := m.newCounter()
if err != nil {
return nil, err
}
m.histogram = h
m.netErrors = netErrors
m.total = total
return m, nil
}
func (m *RTMetrics) CounterWindowSize() time.Duration {
return m.total.WindowSize()
}
// NetworkErrorRatio calculates the proportion of network errors, such as timeouts and dropped
// connections, that occurred in the given time window relative to the total request count.
func (m *RTMetrics) NetworkErrorRatio() float64 {
if m.total.Count() == 0 {
return 0
}
return float64(m.netErrors.Count()) / float64(m.total.Count())
}
// ResponseCodeRatio calculates the ratio of count(startA..endA) / count(startB..endB)
func (m *RTMetrics) ResponseCodeRatio(startA, endA, startB, endB int) float64 {
a := int64(0)
b := int64(0)
m.statusCodesLock.RLock()
defer m.statusCodesLock.RUnlock()
for code, v := range m.statusCodes {
if code < endA && code >= startA {
a += v.Count()
}
if code < endB && code >= startB {
b += v.Count()
}
}
if b != 0 {
return float64(a) / float64(b)
}
return 0
}
func (m *RTMetrics) Append(other *RTMetrics) error {
if m == other {
return errors.New("RTMetrics cannot append to self")
}
if err := m.total.Append(other.total); err != nil {
return err
}
if err := m.netErrors.Append(other.netErrors); err != nil {
return err
}
m.statusCodesLock.Lock()
defer m.statusCodesLock.Unlock()
other.statusCodesLock.RLock()
defer other.statusCodesLock.RUnlock()
for code, c := range other.statusCodes {
o, ok := m.statusCodes[code]
if ok {
if err := o.Append(c); err != nil {
return err
}
} else {
m.statusCodes[code] = c.Clone()
}
}
return m.histogram.Append(other.histogram)
}
func (m *RTMetrics) Record(code int, duration time.Duration) {
m.total.Inc(1)
if code == http.StatusGatewayTimeout || code == http.StatusBadGateway {
m.netErrors.Inc(1)
}
m.recordStatusCode(code)
m.recordLatency(duration)
}
// TotalCount returns the total count of processed requests collected.
func (m *RTMetrics) TotalCount() int64 {
return m.total.Count()
}
// NetworkErrorCount returns the total count of network errors observed
func (m *RTMetrics) NetworkErrorCount() int64 {
return m.netErrors.Count()
}
// StatusCodesCounts returns a map with the counts of the response codes
func (m *RTMetrics) StatusCodesCounts() map[int]int64 {
sc := make(map[int]int64)
m.statusCodesLock.RLock()
defer m.statusCodesLock.RUnlock()
for k, v := range m.statusCodes {
if v.Count() != 0 {
sc[k] = v.Count()
}
}
return sc
}
// LatencyHistogram computes and returns the resulting histogram with the latencies observed.
func (m *RTMetrics) LatencyHistogram() (*HDRHistogram, error) {
return m.histogram.Merged()
}
func (m *RTMetrics) Reset() {
m.histogram.Reset()
m.total.Reset()
m.netErrors.Reset()
m.statusCodesLock.Lock()
defer m.statusCodesLock.Unlock()
m.statusCodes = make(map[int]*RollingCounter)
}
func (m *RTMetrics) recordNetError() error {
m.netErrors.Inc(1)
return nil
}
func (m *RTMetrics) recordLatency(d time.Duration) error {
return m.histogram.RecordLatencies(d, 1)
}
func (m *RTMetrics) recordStatusCode(statusCode int) error {
m.statusCodesLock.RLock()
if c, ok := m.statusCodes[statusCode]; ok {
c.Inc(1)
m.statusCodesLock.RUnlock()
return nil
}
m.statusCodesLock.RUnlock()
m.statusCodesLock.Lock()
defer m.statusCodesLock.Unlock()
// Check if another goroutine has written our counter already
if c, ok := m.statusCodes[statusCode]; ok {
c.Inc(1)
return nil
}
c, err := m.newCounter()
if err != nil {
return err
}
c.Inc(1)
m.statusCodes[statusCode] = c
return nil
}
const (
counterBuckets = 10
counterResolution = time.Second
histMin = 1
histMax = 3600000000 // 1 hour in microseconds
histSignificantFigures = 2 // significant figures (1% precision)
histBuckets = 6 // number of sub-histograms in a rolling histogram
histPeriod = 10 * time.Second // roll time
)
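
A short sketch showing how RTMetrics aggregates request outcomes (illustrative only; the sample status codes and latencies are assumptions):

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/vulcand/oxy/memmetrics"
)

func main() {
	m, err := memmetrics.NewRTMetrics()
	if err != nil {
		panic(err)
	}

	// Record a few round trips: status code plus observed latency.
	m.Record(http.StatusOK, 12*time.Millisecond)
	m.Record(http.StatusOK, 30*time.Millisecond)
	m.Record(http.StatusBadGateway, 250*time.Millisecond) // counted as a network error

	fmt.Println("total requests:", m.TotalCount())
	fmt.Printf("network error ratio: %.2f\n", m.NetworkErrorRatio())
	fmt.Println("status code counts:", m.StatusCodesCounts())
}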

465
vendor/github.com/vulcand/oxy/roundrobin/rebalancer.go generated vendored Normal file
View file

@ -0,0 +1,465 @@
package roundrobin
import (
"fmt"
"net/http"
"net/url"
"sync"
"time"
"github.com/mailgun/timetools"
"github.com/vulcand/oxy/memmetrics"
"github.com/vulcand/oxy/utils"
)
// RebalancerOption - functional option setter for rebalancer
type RebalancerOption func(*Rebalancer) error
// Meter measures server performance and returns its relative value via a rating
type Meter interface {
Rating() float64
Record(int, time.Duration)
IsReady() bool
}
type NewMeterFn func() (Meter, error)
// Rebalancer increases weights on servers that perform better than others. It also rolls back to original weights
// if the servers have changed. It is designed as a wrapper on top of the roundrobin.
type Rebalancer struct {
// mutex
mtx *sync.Mutex
// As usual, control time in tests
clock timetools.TimeProvider
// Time that freezes state machine to accumulate stats after updating the weights
backoffDuration time.Duration
// Timer is set to give probing some time to take place
timer time.Time
// server records that remember original weights
servers []*rbServer
// next is internal load balancer next in chain
next balancerHandler
// errHandler is HTTP handler called in case of errors
errHandler utils.ErrorHandler
log utils.Logger
ratings []float64
// creates new meters
newMeter NewMeterFn
// sticky session object
ss *StickySession
}
func RebalancerLogger(log utils.Logger) RebalancerOption {
return func(r *Rebalancer) error {
r.log = log
return nil
}
}
func RebalancerClock(clock timetools.TimeProvider) RebalancerOption {
return func(r *Rebalancer) error {
r.clock = clock
return nil
}
}
func RebalancerBackoff(d time.Duration) RebalancerOption {
return func(r *Rebalancer) error {
r.backoffDuration = d
return nil
}
}
func RebalancerMeter(newMeter NewMeterFn) RebalancerOption {
return func(r *Rebalancer) error {
r.newMeter = newMeter
return nil
}
}
// RebalancerErrorHandler is a functional argument that sets error handler of the server
func RebalancerErrorHandler(h utils.ErrorHandler) RebalancerOption {
return func(r *Rebalancer) error {
r.errHandler = h
return nil
}
}
func RebalancerStickySession(ss *StickySession) RebalancerOption {
return func(r *Rebalancer) error {
r.ss = ss
return nil
}
}
func NewRebalancer(handler balancerHandler, opts ...RebalancerOption) (*Rebalancer, error) {
rb := &Rebalancer{
mtx: &sync.Mutex{},
next: handler,
ss: nil,
}
for _, o := range opts {
if err := o(rb); err != nil {
return nil, err
}
}
if rb.clock == nil {
rb.clock = &timetools.RealTime{}
}
if rb.backoffDuration == 0 {
rb.backoffDuration = 10 * time.Second
}
if rb.log == nil {
rb.log = &utils.NOPLogger{}
}
if rb.newMeter == nil {
rb.newMeter = func() (Meter, error) {
rc, err := memmetrics.NewRatioCounter(10, time.Second, memmetrics.RatioClock(rb.clock))
if err != nil {
return nil, err
}
return &codeMeter{
r: rc,
codeS: http.StatusInternalServerError,
codeE: http.StatusGatewayTimeout + 1,
}, nil
}
}
if rb.errHandler == nil {
rb.errHandler = utils.DefaultHandler
}
return rb, nil
}
func (rb *Rebalancer) Servers() []*url.URL {
rb.mtx.Lock()
defer rb.mtx.Unlock()
return rb.next.Servers()
}
func (rb *Rebalancer) ServeHTTP(w http.ResponseWriter, req *http.Request) {
pw := &utils.ProxyWriter{W: w}
start := rb.clock.UtcNow()
// make shallow copy of request before changing anything to avoid side effects
newReq := *req
stuck := false
if rb.ss != nil {
cookie_url, present, err := rb.ss.GetBackend(&newReq, rb.Servers())
if err != nil {
rb.errHandler.ServeHTTP(w, req, err)
return
}
if present {
newReq.URL = cookie_url
stuck = true
}
}
if !stuck {
url, err := rb.next.NextServer()
if err != nil {
rb.errHandler.ServeHTTP(w, req, err)
return
}
if rb.ss != nil {
rb.ss.StickBackend(url, &w)
}
newReq.URL = url
}
rb.next.Next().ServeHTTP(pw, &newReq)
rb.recordMetrics(newReq.URL, pw.Code, rb.clock.UtcNow().Sub(start))
rb.adjustWeights()
}
func (rb *Rebalancer) recordMetrics(u *url.URL, code int, latency time.Duration) {
rb.mtx.Lock()
defer rb.mtx.Unlock()
if srv, i := rb.findServer(u); i != -1 {
srv.meter.Record(code, latency)
}
}
func (rb *Rebalancer) reset() {
for _, s := range rb.servers {
s.curWeight = s.origWeight
rb.next.UpsertServer(s.url, Weight(s.origWeight))
}
rb.timer = rb.clock.UtcNow().Add(-1 * time.Second)
rb.ratings = make([]float64, len(rb.servers))
}
func (rb *Rebalancer) Wrap(next balancerHandler) error {
if rb.next != nil {
return fmt.Errorf("already bound to %T", rb.next)
}
rb.next = next
return nil
}
func (rb *Rebalancer) UpsertServer(u *url.URL, options ...ServerOption) error {
rb.mtx.Lock()
defer rb.mtx.Unlock()
if err := rb.next.UpsertServer(u, options...); err != nil {
return err
}
weight, _ := rb.next.ServerWeight(u)
if err := rb.upsertServer(u, weight); err != nil {
rb.next.RemoveServer(u)
return err
}
rb.reset()
return nil
}
func (rb *Rebalancer) RemoveServer(u *url.URL) error {
rb.mtx.Lock()
defer rb.mtx.Unlock()
return rb.removeServer(u)
}
func (rb *Rebalancer) removeServer(u *url.URL) error {
_, i := rb.findServer(u)
if i == -1 {
return fmt.Errorf("%v not found", u)
}
if err := rb.next.RemoveServer(u); err != nil {
return err
}
rb.servers = append(rb.servers[:i], rb.servers[i+1:]...)
rb.reset()
return nil
}
func (rb *Rebalancer) upsertServer(u *url.URL, weight int) error {
if s, i := rb.findServer(u); i != -1 {
// Server is already tracked: update its original weight instead of appending a duplicate record.
s.origWeight = weight
return nil
}
meter, err := rb.newMeter()
if err != nil {
return err
}
rbSrv := &rbServer{
url: utils.CopyURL(u),
origWeight: weight,
curWeight: weight,
meter: meter,
}
rb.servers = append(rb.servers, rbSrv)
return nil
}
func (r *Rebalancer) findServer(u *url.URL) (*rbServer, int) {
if len(r.servers) == 0 {
return nil, -1
}
for i, s := range r.servers {
if sameURL(u, s.url) {
return s, i
}
}
return nil, -1
}
// adjustWeights is called on every load balancer ServeHTTP call; once metrics are ready and
// the probing timer has expired, it adjusts the server weights if needed.
func (rb *Rebalancer) adjustWeights() {
rb.mtx.Lock()
defer rb.mtx.Unlock()
// In this case adjusting weights would have no effect, so do nothing
if len(rb.servers) < 2 {
return
}
// Metrics are not ready
if !rb.metricsReady() {
return
}
if !rb.timerExpired() {
return
}
if rb.markServers() {
if rb.setMarkedWeights() {
rb.setTimer()
}
} else { // No servers that are different by their quality, so converge weights
if rb.convergeWeights() {
rb.setTimer()
}
}
}
func (rb *Rebalancer) applyWeights() {
for _, srv := range rb.servers {
rb.log.Infof("upsert server %v, weight %v", srv.url, srv.curWeight)
rb.next.UpsertServer(srv.url, Weight(srv.curWeight))
}
}
func (rb *Rebalancer) setMarkedWeights() bool {
changed := false
// Increase weights on servers marked as good
for _, srv := range rb.servers {
if srv.good {
weight := increase(srv.curWeight)
if weight <= FSMMaxWeight {
rb.log.Infof("increasing weight of %v from %v to %v", srv.url, srv.curWeight, weight)
srv.curWeight = weight
changed = true
}
}
}
if changed {
rb.normalizeWeights()
rb.applyWeights()
return true
}
return false
}
func (rb *Rebalancer) setTimer() {
rb.timer = rb.clock.UtcNow().Add(rb.backoffDuration)
}
func (rb *Rebalancer) timerExpired() bool {
return rb.timer.Before(rb.clock.UtcNow())
}
func (rb *Rebalancer) metricsReady() bool {
for _, s := range rb.servers {
if !s.meter.IsReady() {
return false
}
}
return true
}
// markServers splits servers into two groups with bad and good failure rates.
// It compares the relative performance of the servers, so if all servers have approximately
// the same error rate this function returns the result as if all servers are equally good.
func (rb *Rebalancer) markServers() bool {
for i, srv := range rb.servers {
rb.ratings[i] = srv.meter.Rating()
}
g, b := memmetrics.SplitFloat64(splitThreshold, 0, rb.ratings)
for i, srv := range rb.servers {
if g[rb.ratings[i]] {
srv.good = true
} else {
srv.good = false
}
}
if len(g) != 0 && len(b) != 0 {
rb.log.Infof("bad: %v good: %v, ratings: %v", b, g, rb.ratings)
}
return len(g) != 0 && len(b) != 0
}
func (rb *Rebalancer) convergeWeights() bool {
// If we have previously changed servers, try to restore weights to the original state
changed := false
for _, s := range rb.servers {
if s.origWeight == s.curWeight {
continue
}
changed = true
newWeight := decrease(s.origWeight, s.curWeight)
rb.log.Infof("decreasing weight of %v from %v to %v", s.url, s.curWeight, newWeight)
s.curWeight = newWeight
}
if !changed {
return false
}
rb.normalizeWeights()
rb.applyWeights()
return true
}
func (rb *Rebalancer) weightsGcd() int {
divisor := -1
for _, w := range rb.servers {
if divisor == -1 {
divisor = w.curWeight
} else {
divisor = gcd(divisor, w.curWeight)
}
}
return divisor
}
func (rb *Rebalancer) normalizeWeights() {
gcd := rb.weightsGcd()
if gcd <= 1 {
return
}
for _, s := range rb.servers {
s.curWeight = s.curWeight / gcd
}
}
func increase(weight int) int {
return weight * FSMGrowFactor
}
func decrease(target, current int) int {
adjusted := current / FSMGrowFactor
if adjusted < target {
return target
} else {
return adjusted
}
}
// rebalancer server record that keeps track of the original weight supplied by user
type rbServer struct {
url *url.URL
origWeight int // original weight supplied by user
curWeight int // current weight
good bool
meter Meter
}
const (
// This is the maximum weight that handler will set for the server
FSMMaxWeight = 4096
// Multiplier for the server weight
FSMGrowFactor = 4
)
type codeMeter struct {
r *memmetrics.RatioCounter
codeS int
codeE int
}
func (n *codeMeter) Rating() float64 {
return n.r.Ratio()
}
func (n *codeMeter) Record(code int, d time.Duration) {
if code >= n.codeS && code < n.codeE {
n.r.IncA(1)
} else {
n.r.IncB(1)
}
}
func (n *codeMeter) IsReady() bool {
return n.r.IsReady()
}
// splitThreshold tells how far the value should go from the median + median absolute deviation before it is considered an outlier
const splitThreshold = 1.5
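
A sketch of wiring the Rebalancer around a RoundRobin balancer (illustrative only; the backend URLs are assumptions, and the trivial next handler stands in for a real forwarding proxy):

package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/roundrobin"
)

func main() {
	// The innermost handler would normally forward the request to req.URL; a stub is used here.
	next := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	lb, err := roundrobin.New(next)
	if err != nil {
		panic(err)
	}

	// The rebalancer observes response codes per server and adjusts weights accordingly.
	rb, err := roundrobin.NewRebalancer(lb)
	if err != nil {
		panic(err)
	}

	// Assumed backends; errors elided for brevity.
	a, _ := url.Parse("http://127.0.0.1:8081")
	b, _ := url.Parse("http://127.0.0.1:8082")
	rb.UpsertServer(a)
	rb.UpsertServer(b)

	http.ListenAndServe(":8080", rb)
}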

297
vendor/github.com/vulcand/oxy/roundrobin/rr.go generated vendored Normal file
View file

@ -0,0 +1,297 @@
// Package roundrobin implements a dynamic weighted round-robin load balancer HTTP handler
package roundrobin
import (
"fmt"
"net/http"
"net/url"
"sync"
"github.com/vulcand/oxy/utils"
)
// Weight is an optional functional argument that sets weight of the server
func Weight(w int) ServerOption {
return func(s *server) error {
if w < 0 {
return fmt.Errorf("Weight should be >= 0")
}
s.weight = w
return nil
}
}
// ErrorHandler is a functional argument that sets error handler of the server
func ErrorHandler(h utils.ErrorHandler) LBOption {
return func(s *RoundRobin) error {
s.errHandler = h
return nil
}
}
func EnableStickySession(ss *StickySession) LBOption {
return func(s *RoundRobin) error {
s.ss = ss
return nil
}
}
type RoundRobin struct {
mutex *sync.Mutex
next http.Handler
errHandler utils.ErrorHandler
// Current index (starts from -1)
index int
servers []*server
currentWeight int
ss *StickySession
}
func New(next http.Handler, opts ...LBOption) (*RoundRobin, error) {
rr := &RoundRobin{
next: next,
index: -1,
mutex: &sync.Mutex{},
servers: []*server{},
ss: nil,
}
for _, o := range opts {
if err := o(rr); err != nil {
return nil, err
}
}
if rr.errHandler == nil {
rr.errHandler = utils.DefaultHandler
}
return rr, nil
}
func (r *RoundRobin) Next() http.Handler {
return r.next
}
func (r *RoundRobin) ServeHTTP(w http.ResponseWriter, req *http.Request) {
// make a shallow copy of the request before changing anything to avoid side effects
newReq := *req
stuck := false
if r.ss != nil {
cookie_url, present, err := r.ss.GetBackend(&newReq, r.Servers())
if err != nil {
r.errHandler.ServeHTTP(w, req, err)
return
}
if present {
newReq.URL = cookie_url
stuck = true
}
}
if !stuck {
url, err := r.NextServer()
if err != nil {
r.errHandler.ServeHTTP(w, req, err)
return
}
if r.ss != nil {
r.ss.StickBackend(url, &w)
}
newReq.URL = url
}
r.next.ServeHTTP(w, &newReq)
}
func (r *RoundRobin) NextServer() (*url.URL, error) {
srv, err := r.nextServer()
if err != nil {
return nil, err
}
return utils.CopyURL(srv.url), nil
}
func (r *RoundRobin) nextServer() (*server, error) {
r.mutex.Lock()
defer r.mutex.Unlock()
if len(r.servers) == 0 {
return nil, fmt.Errorf("no servers in the pool")
}
// The algorithm below may look messy, but it is actually very simple: it calculates the GCD
// and subtracts it on every iteration, which interleaves the servers and allows us not to
// rebuild an iterator every time the weights are readjusted
// GCD across all enabled servers
gcd := r.weightGcd()
// Maximum weight across all enabled servers
max := r.maxWeight()
for {
r.index = (r.index + 1) % len(r.servers)
if r.index == 0 {
r.currentWeight = r.currentWeight - gcd
if r.currentWeight <= 0 {
r.currentWeight = max
if r.currentWeight == 0 {
return nil, fmt.Errorf("all servers have 0 weight")
}
}
}
srv := r.servers[r.index]
if srv.weight >= r.currentWeight {
return srv, nil
}
}
// We did full circle and found no available servers
return nil, fmt.Errorf("no available servers")
}
func (r *RoundRobin) RemoveServer(u *url.URL) error {
r.mutex.Lock()
defer r.mutex.Unlock()
e, index := r.findServerByURL(u)
if e == nil {
return fmt.Errorf("server not found")
}
r.servers = append(r.servers[:index], r.servers[index+1:]...)
r.resetState()
return nil
}
func (rr *RoundRobin) Servers() []*url.URL {
rr.mutex.Lock()
defer rr.mutex.Unlock()
out := make([]*url.URL, len(rr.servers))
for i, srv := range rr.servers {
out[i] = srv.url
}
return out
}
func (rr *RoundRobin) ServerWeight(u *url.URL) (int, bool) {
rr.mutex.Lock()
defer rr.mutex.Unlock()
if s, _ := rr.findServerByURL(u); s != nil {
return s.weight, true
}
return -1, false
}
// UpsertServer adds the server to the pool, or updates its options if it is already present
func (rr *RoundRobin) UpsertServer(u *url.URL, options ...ServerOption) error {
rr.mutex.Lock()
defer rr.mutex.Unlock()
if u == nil {
return fmt.Errorf("server URL can't be nil")
}
if s, _ := rr.findServerByURL(u); s != nil {
for _, o := range options {
if err := o(s); err != nil {
return err
}
}
rr.resetState()
return nil
}
srv := &server{url: utils.CopyURL(u)}
for _, o := range options {
if err := o(srv); err != nil {
return err
}
}
if srv.weight == 0 {
srv.weight = defaultWeight
}
rr.servers = append(rr.servers, srv)
rr.resetState()
return nil
}
func (r *RoundRobin) resetIterator() {
r.index = -1
r.currentWeight = 0
}
func (r *RoundRobin) resetState() {
r.resetIterator()
}
func (r *RoundRobin) findServerByURL(u *url.URL) (*server, int) {
if len(r.servers) == 0 {
return nil, -1
}
for i, s := range r.servers {
if sameURL(u, s.url) {
return s, i
}
}
return nil, -1
}
func (rr *RoundRobin) maxWeight() int {
max := -1
for _, s := range rr.servers {
if s.weight > max {
max = s.weight
}
}
return max
}
func (rr *RoundRobin) weightGcd() int {
divisor := -1
for _, s := range rr.servers {
if divisor == -1 {
divisor = s.weight
} else {
divisor = gcd(divisor, s.weight)
}
}
return divisor
}
func gcd(a, b int) int {
for b != 0 {
a, b = b, a%b
}
return a
}
// ServerOption provides various options for server, e.g. weight
type ServerOption func(*server) error
// LBOption provides options for load balancer
type LBOption func(*RoundRobin) error
// server keeps the per-server state; additional parameters can be supplied via ServerOption when adding the server
type server struct {
url *url.URL
// Relative weight of the endpoint compared to other endpoints in the load balancer
weight int
}
const defaultWeight = 1
func sameURL(a, b *url.URL) bool {
return a.Path == b.Path && a.Host == b.Host && a.Scheme == b.Scheme
}
type balancerHandler interface {
Servers() []*url.URL
ServeHTTP(w http.ResponseWriter, req *http.Request)
ServerWeight(u *url.URL) (int, bool)
RemoveServer(u *url.URL) error
UpsertServer(u *url.URL, options ...ServerOption) error
NextServer() (*url.URL, error)
Next() http.Handler
}
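
A minimal sketch of using the round-robin balancer with explicit weights (the backend URLs and the stub next handler are assumptions; in practice the next handler would forward to req.URL):

package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/roundrobin"
)

func main() {
	// Stub handler; a real setup would proxy the rewritten request.
	next := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(r.URL.Host))
	})

	lb, err := roundrobin.New(next)
	if err != nil {
		panic(err)
	}

	u1, _ := url.Parse("http://10.0.0.1:80")
	u2, _ := url.Parse("http://10.0.0.2:80")
	lb.UpsertServer(u1, roundrobin.Weight(3)) // picked roughly three times as often
	lb.UpsertServer(u2, roundrobin.Weight(1))

	http.ListenAndServe(":8080", lb)
}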

View file

@ -0,0 +1,57 @@
// Package stickysession is a mixin for load balancers that implements layer 7 (HTTP cookie) session affinity
package roundrobin
import (
"net/http"
"net/url"
)
type StickySession struct {
cookiename string
}
func NewStickySession(c string) *StickySession {
return &StickySession{c}
}
// GetBackend returns the backend URL stored in the sticky cookie, iff the backend is still in the valid list of servers.
func (s *StickySession) GetBackend(req *http.Request, servers []*url.URL) (*url.URL, bool, error) {
cookie, err := req.Cookie(s.cookiename)
switch err {
case nil:
case http.ErrNoCookie:
return nil, false, nil
default:
return nil, false, err
}
s_url, err := url.Parse(cookie.Value)
if err != nil {
return nil, false, err
}
if s.isBackendAlive(s_url, servers) {
return s_url, true, nil
} else {
return nil, false, nil
}
}
func (s *StickySession) StickBackend(backend *url.URL, w *http.ResponseWriter) {
c := &http.Cookie{Name: s.cookiename, Value: backend.String()}
http.SetCookie(*w, c)
return
}
func (s *StickySession) isBackendAlive(needle *url.URL, haystack []*url.URL) bool {
if len(haystack) == 0 {
return false
}
for _, s := range haystack {
if sameURL(needle, s) {
return true
}
}
return false
}
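
A sketch enabling cookie-based stickiness on the balancer (the cookie name and backend URL are assumptions):

package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/roundrobin"
)

func main() {
	next := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Requests carrying the "backend" cookie keep hitting the same server
	// as long as that server is still registered.
	ss := roundrobin.NewStickySession("backend")
	lb, err := roundrobin.New(next, roundrobin.EnableStickySession(ss))
	if err != nil {
		panic(err)
	}

	srv, _ := url.Parse("http://127.0.0.1:9000")
	lb.UpsertServer(srv)

	http.ListenAndServe(":8080", lb)
}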

351
vendor/github.com/vulcand/oxy/stream/stream.go generated vendored Normal file
View file

@ -0,0 +1,351 @@
/*
Package stream provides http.Handler middleware that solves several problems when dealing with HTTP requests:
Reads the entire request and response into a buffer, optionally buffering it to disk for large requests.
Checks the limits for the requests and responses, rejecting the request or response if a limit is exceeded.
Changes the request content-transfer-encoding from chunked and provides the total size to the handlers.
Examples of a streaming middleware:
// sample HTTP handler
handler := http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
w.Write([]byte("hello"))
})
// Stream will read the body in buffer before passing the request to the handler
// calculate total size of the request and transform it from chunked encoding
// before passing to the server
stream.New(handler)
// This version will buffer up to 2MB in memory and will serialize any extra
// to a temporary file, if the request size exceeds 10MB it will reject the request
stream.New(handler,
stream.MemRequestBodyBytes(2 * 1024 * 1024),
stream.MaxRequestBodyBytes(10 * 1024 * 1024))
// Will do the same as above, but with responses
stream.New(handler,
stream.MemResponseBodyBytes(2 * 1024 * 1024),
stream.MaxResponseBodyBytes(10 * 1024 * 1024))
// Stream will replay the request if the handler returns error at least 3 times
// before returning the response
stream.New(handler, stream.Retry(`IsNetworkError() && Attempts() <= 2`))
*/
package stream
import (
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/mailgun/multibuf"
"github.com/vulcand/oxy/utils"
)
const (
// Store up to 1MB in RAM
DefaultMemBodyBytes = 1048576
// No limit by default
DefaultMaxBodyBytes = -1
// Maximum retry attempts
DefaultMaxRetryAttempts = 10
)
var errHandler utils.ErrorHandler = &SizeErrHandler{}
// Streamer is responsible for streaming requests and responses.
// It buffers large requests and responses to disk.
type Streamer struct {
maxRequestBodyBytes int64
memRequestBodyBytes int64
maxResponseBodyBytes int64
memResponseBodyBytes int64
retryPredicate hpredicate
next http.Handler
errHandler utils.ErrorHandler
log utils.Logger
}
// New returns a new streamer middleware. New() function supports optional functional arguments
func New(next http.Handler, setters ...optSetter) (*Streamer, error) {
strm := &Streamer{
next: next,
maxRequestBodyBytes: DefaultMaxBodyBytes,
memRequestBodyBytes: DefaultMemBodyBytes,
maxResponseBodyBytes: DefaultMaxBodyBytes,
memResponseBodyBytes: DefaultMemBodyBytes,
}
for _, s := range setters {
if err := s(strm); err != nil {
return nil, err
}
}
if strm.errHandler == nil {
strm.errHandler = errHandler
}
if strm.log == nil {
strm.log = utils.NullLogger
}
return strm, nil
}
type optSetter func(s *Streamer) error
// Retry provides a predicate that allows stream middleware to replay the request
// if it matches certain condition, e.g. returns special error code. Available functions are:
//
// Attempts() - limits the amount of retry attempts
// ResponseCode() - returns http response code
// IsNetworkError() - tests if response code is related to networking error
//
// Example of the predicate:
//
// `Attempts() <= 2 && ResponseCode() == 502`
//
func Retry(predicate string) optSetter {
return func(s *Streamer) error {
p, err := parseExpression(predicate)
if err != nil {
return err
}
s.retryPredicate = p
return nil
}
}
// Logger sets the logger that will be used by this middleware.
func Logger(l utils.Logger) optSetter {
return func(s *Streamer) error {
s.log = l
return nil
}
}
// ErrorHandler sets error handler of the server
func ErrorHandler(h utils.ErrorHandler) optSetter {
return func(s *Streamer) error {
s.errHandler = h
return nil
}
}
// MaxRequestBodyBytes sets the maximum request body size in bytes
func MaxRequestBodyBytes(m int64) optSetter {
return func(s *Streamer) error {
if m < 0 {
return fmt.Errorf("max bytes should be >= 0 got %d", m)
}
s.maxRequestBodyBytes = m
return nil
}
}
// MemRequestBodyBytes sets the maximum request body to be stored in memory;
// the stream middleware will serialize the excess to disk.
func MemRequestBodyBytes(m int64) optSetter {
return func(s *Streamer) error {
if m < 0 {
return fmt.Errorf("mem bytes should be >= 0 got %d", m)
}
s.memRequestBodyBytes = m
return nil
}
}
// MaxResponseBodyBytes sets the maximum response body size in bytes
func MaxResponseBodyBytes(m int64) optSetter {
return func(s *Streamer) error {
if m < 0 {
return fmt.Errorf("max bytes should be >= 0 got %d", m)
}
s.maxResponseBodyBytes = m
return nil
}
}
// MemResponseBodyBytes sets the maximum response body to be stored in memory;
// the stream middleware will serialize the excess to disk.
func MemResponseBodyBytes(m int64) optSetter {
return func(s *Streamer) error {
if m < 0 {
return fmt.Errorf("mem bytes should be >= 0 got %d", m)
}
s.memResponseBodyBytes = m
return nil
}
}
// Wrap sets the next handler to be called by stream handler.
func (s *Streamer) Wrap(next http.Handler) error {
s.next = next
return nil
}
func (s *Streamer) ServeHTTP(w http.ResponseWriter, req *http.Request) {
if err := s.checkLimit(req); err != nil {
s.log.Infof("request body over limit: %v", err)
s.errHandler.ServeHTTP(w, req, err)
return
}
// Read the body while keeping limits in mind. This reader controls the maximum bytes
// to read into memory and disk. It returns an error if the total request size exceeds the
// predefined MaxSizeBytes. This can occur with a chunked request, in which case ContentLength
// is set to -1 and the reader would otherwise be an unbounded bufio in the http.Server
body, err := multibuf.New(req.Body, multibuf.MaxBytes(s.maxRequestBodyBytes), multibuf.MemBytes(s.memRequestBodyBytes))
if err != nil || body == nil {
s.errHandler.ServeHTTP(w, req, err)
return
}
// Set the request body to a buffered reader that can replay the read and execute Seek.
// Note that we don't change the original request body as it's handled by the http server
// and we don't want to mess with the standard library
defer body.Close()
// We need to set ContentLength based on known request size. The incoming request may have been
// set without content length or using chunked TransferEncoding
totalSize, err := body.Size()
if err != nil {
s.log.Errorf("failed to get size, err %v", err)
s.errHandler.ServeHTTP(w, req, err)
return
}
outreq := s.copyRequest(req, body, totalSize)
attempt := 1
for {
// We create a special writer that will limit the response size, buffer it to disk if necessary
writer, err := multibuf.NewWriterOnce(multibuf.MaxBytes(s.maxResponseBodyBytes), multibuf.MemBytes(s.memResponseBodyBytes))
if err != nil {
s.errHandler.ServeHTTP(w, req, err)
return
}
// We are mimicking http.ResponseWriter to replace writer with our special writer
b := &bufferWriter{
header: make(http.Header),
buffer: writer,
}
defer b.Close()
s.next.ServeHTTP(b, outreq)
var reader multibuf.MultiReader
if b.expectBody(outreq) {
rdr, err := writer.Reader()
if err != nil {
s.log.Errorf("failed to read response, err %v", err)
s.errHandler.ServeHTTP(w, req, err)
return
}
defer rdr.Close()
reader = rdr
}
if (s.retryPredicate == nil || attempt > DefaultMaxRetryAttempts) ||
!s.retryPredicate(&context{r: req, attempt: attempt, responseCode: b.code, log: s.log}) {
utils.CopyHeaders(w.Header(), b.Header())
w.WriteHeader(b.code)
if reader != nil {
io.Copy(w, reader)
}
return
}
attempt += 1
if _, err := body.Seek(0, 0); err != nil {
s.log.Errorf("Failed to rewind: error: %v", err)
s.errHandler.ServeHTTP(w, req, err)
return
}
outreq = s.copyRequest(req, body, totalSize)
s.log.Infof("retry Request(%v %v) attempt %v", req.Method, req.URL, attempt)
}
}
func (s *Streamer) copyRequest(req *http.Request, body io.ReadCloser, bodySize int64) *http.Request {
o := *req
o.URL = utils.CopyURL(req.URL)
o.Header = make(http.Header)
utils.CopyHeaders(o.Header, req.Header)
o.ContentLength = bodySize
// remove TransferEncoding that could have been previously set because we have transformed the request from chunked encoding
o.TransferEncoding = []string{}
// http.Transport will close the request body on any error, we are controlling the close process ourselves, so we override the closer here
o.Body = ioutil.NopCloser(body)
return &o
}
func (s *Streamer) checkLimit(req *http.Request) error {
if s.maxRequestBodyBytes <= 0 {
return nil
}
if req.ContentLength > s.maxRequestBodyBytes {
return &multibuf.MaxSizeReachedError{MaxSize: s.maxRequestBodyBytes}
}
return nil
}
type bufferWriter struct {
header http.Header
code int
buffer multibuf.WriterOnce
}
// RFC2616 #4.4
func (b *bufferWriter) expectBody(r *http.Request) bool {
if r.Method == "HEAD" {
return false
}
if (b.code >= 100 && b.code < 200) || b.code == 204 || b.code == 304 {
return false
}
if b.header.Get("Content-Length") == "" && b.header.Get("Transfer-Encoding") == "" {
return false
}
if b.header.Get("Content-Length") == "0" {
return false
}
return true
}
func (b *bufferWriter) Close() error {
return b.buffer.Close()
}
func (b *bufferWriter) Header() http.Header {
return b.header
}
func (b *bufferWriter) Write(buf []byte) (int, error) {
return b.buffer.Write(buf)
}
// WriteHeader sets rw.Code.
func (b *bufferWriter) WriteHeader(code int) {
b.code = code
}
type SizeErrHandler struct {
}
func (e *SizeErrHandler) ServeHTTP(w http.ResponseWriter, req *http.Request, err error) {
if _, ok := err.(*multibuf.MaxSizeReachedError); ok {
w.WriteHeader(http.StatusRequestEntityTooLarge)
w.Write([]byte(http.StatusText(http.StatusRequestEntityTooLarge)))
return
}
utils.DefaultHandler.ServeHTTP(w, req, err)
}
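
A sketch of composing the stream middleware with size limits and a retry predicate (the limits and the predicate string are illustrative choices):

package main

import (
	"net/http"

	"github.com/vulcand/oxy/stream"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	// Buffer up to 1MB of the request body in memory, reject bodies over 8MB,
	// and replay the request up to twice on network errors.
	st, err := stream.New(handler,
		stream.MemRequestBodyBytes(1<<20),
		stream.MaxRequestBodyBytes(8<<20),
		stream.Retry(`IsNetworkError() && Attempts() <= 2`),
	)
	if err != nil {
		panic(err)
	}

	http.ListenAndServe(":8080", st)
}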

227
vendor/github.com/vulcand/oxy/stream/threshold.go generated vendored Normal file
View file

@ -0,0 +1,227 @@
package stream
import (
"fmt"
"net/http"
"github.com/vulcand/oxy/utils"
"github.com/vulcand/predicate"
)
func IsValidExpression(expr string) bool {
_, err := parseExpression(expr)
return err == nil
}
type context struct {
r *http.Request
attempt int
responseCode int
log utils.Logger
}
type hpredicate func(*context) bool
// parseExpression parses an expression in Go syntax into a retry predicate
func parseExpression(in string) (hpredicate, error) {
p, err := predicate.NewParser(predicate.Def{
Operators: predicate.Operators{
AND: and,
OR: or,
EQ: eq,
NEQ: neq,
LT: lt,
GT: gt,
LE: le,
GE: ge,
},
Functions: map[string]interface{}{
"RequestMethod": requestMethod,
"IsNetworkError": isNetworkError,
"Attempts": attempts,
"ResponseCode": responseCode,
},
})
if err != nil {
return nil, err
}
out, err := p.Parse(in)
if err != nil {
return nil, err
}
pr, ok := out.(hpredicate)
if !ok {
return nil, fmt.Errorf("expected predicate, got %T", out)
}
return pr, nil
}
type toString func(c *context) string
type toInt func(c *context) int
// RequestMethod returns mapper of the request to its method e.g. POST
func requestMethod() toString {
return func(c *context) string {
return c.r.Method
}
}
// Attempts returns mapper of the request to the number of proxy attempts
func attempts() toInt {
return func(c *context) int {
return c.attempt
}
}
// ResponseCode returns mapper of the request to the last response code, returns 0 if there was no response code.
func responseCode() toInt {
return func(c *context) int {
return c.responseCode
}
}
// IsNetworkError returns a predicate that returns true if last attempt ended with network error.
func isNetworkError() hpredicate {
return func(c *context) bool {
return c.responseCode == http.StatusBadGateway || c.responseCode == http.StatusGatewayTimeout
}
}
// and returns predicate by joining the passed predicates with logical 'and'
func and(fns ...hpredicate) hpredicate {
return func(c *context) bool {
for _, fn := range fns {
if !fn(c) {
return false
}
}
return true
}
}
// or returns predicate by joining the passed predicates with logical 'or'
func or(fns ...hpredicate) hpredicate {
return func(c *context) bool {
for _, fn := range fns {
if fn(c) {
return true
}
}
return false
}
}
// not creates negation of the passed predicate
func not(p hpredicate) hpredicate {
return func(c *context) bool {
return !p(c)
}
}
// eq returns predicate that tests for equality of the value of the mapper and the constant
func eq(m interface{}, value interface{}) (hpredicate, error) {
switch mapper := m.(type) {
case toString:
return stringEQ(mapper, value)
case toInt:
return intEQ(mapper, value)
}
return nil, fmt.Errorf("unsupported argument: %T", m)
}
// neq returns predicate that tests for inequality of the value of the mapper and the constant
func neq(m interface{}, value interface{}) (hpredicate, error) {
p, err := eq(m, value)
if err != nil {
return nil, err
}
return not(p), nil
}
// lt returns predicate that tests that value of the mapper function is less than the constant
func lt(m interface{}, value interface{}) (hpredicate, error) {
switch mapper := m.(type) {
case toInt:
return intLT(mapper, value)
}
return nil, fmt.Errorf("unsupported argument: %T", m)
}
// le returns a predicate that tests whether the value of the mapper function is less than or equal to the constant
func le(m interface{}, value interface{}) (hpredicate, error) {
l, err := lt(m, value)
if err != nil {
return nil, err
}
e, err := eq(m, value)
if err != nil {
return nil, err
}
return func(c *context) bool {
return l(c) || e(c)
}, nil
}
// gt returns predicate that tests that value of the mapper function is greater than the constant
func gt(m interface{}, value interface{}) (hpredicate, error) {
switch mapper := m.(type) {
case toInt:
return intGT(mapper, value)
}
return nil, fmt.Errorf("unsupported argument: %T", m)
}
// ge returns a predicate that tests whether the value of the mapper function is greater than or equal to the constant
func ge(m interface{}, value interface{}) (hpredicate, error) {
g, err := gt(m, value)
if err != nil {
return nil, err
}
e, err := eq(m, value)
if err != nil {
return nil, err
}
return func(c *context) bool {
return g(c) || e(c)
}, nil
}
func stringEQ(m toString, val interface{}) (hpredicate, error) {
value, ok := val.(string)
if !ok {
return nil, fmt.Errorf("expected string, got %T", val)
}
return func(c *context) bool {
return m(c) == value
}, nil
}
func intEQ(m toInt, val interface{}) (hpredicate, error) {
value, ok := val.(int)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *context) bool {
return m(c) == value
}, nil
}
func intLT(m toInt, val interface{}) (hpredicate, error) {
value, ok := val.(int)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *context) bool {
return m(c) < value
}, nil
}
func intGT(m toInt, val interface{}) (hpredicate, error) {
value, ok := val.(int)
if !ok {
return nil, fmt.Errorf("expected int, got %T", val)
}
return func(c *context) bool {
return m(c) > value
}, nil
}
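
The exported entry point of this file is IsValidExpression; a quick sketch of validating retry predicates before handing them to stream.Retry (the expressions are examples):

package main

import (
	"fmt"

	"github.com/vulcand/oxy/stream"
)

func main() {
	for _, expr := range []string{
		`IsNetworkError() && Attempts() <= 2`, // valid
		`ResponseCode() == 502`,               // valid
		`Attempts() <= `,                      // malformed, fails to parse
	} {
		fmt.Printf("%-40q valid=%v\n", expr, stream.IsValidExpression(expr))
	}
}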

41
vendor/github.com/vulcand/oxy/utils/auth.go generated vendored Normal file
View file

@ -0,0 +1,41 @@
package utils
import (
"encoding/base64"
"fmt"
"strings"
)
type BasicAuth struct {
Username string
Password string
}
func (ba *BasicAuth) String() string {
encoded := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", ba.Username, ba.Password)))
return fmt.Sprintf("Basic %s", encoded)
}
func ParseAuthHeader(header string) (*BasicAuth, error) {
values := strings.Fields(header)
if len(values) != 2 {
return nil, fmt.Errorf("Failed to parse header '%s'", header)
}
auth_type := strings.ToLower(values[0])
if auth_type != "basic" {
return nil, fmt.Errorf("Expected basic auth type, got '%s'", auth_type)
}
encoded_string := values[1]
decoded_string, err := base64.StdEncoding.DecodeString(encoded_string)
if err != nil {
return nil, fmt.Errorf("Failed to parse header '%s', base64 failed: %s", header, err)
}
values = strings.SplitN(string(decoded_string), ":", 2)
if len(values) != 2 {
return nil, fmt.Errorf("Failed to parse header '%s', expected separator ':'", header)
}
return &BasicAuth{Username: values[0], Password: values[1]}, nil
}
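
A sketch round-tripping a Basic auth header through ParseAuthHeader (the credentials are made up):

package main

import (
	"fmt"

	"github.com/vulcand/oxy/utils"
)

func main() {
	// Build an Authorization header value and parse it back.
	in := utils.BasicAuth{Username: "alice", Password: "s3cret"}
	header := in.String() // e.g. "Basic YWxpY2U6czNjcmV0"

	out, err := utils.ParseAuthHeader(header)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.Username, out.Password)
}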

38
vendor/github.com/vulcand/oxy/utils/handler.go generated vendored Normal file
View file

@ -0,0 +1,38 @@
package utils
import (
"io"
"net"
"net/http"
)
type ErrorHandler interface {
ServeHTTP(w http.ResponseWriter, req *http.Request, err error)
}
var DefaultHandler ErrorHandler = &StdHandler{}
type StdHandler struct {
}
func (e *StdHandler) ServeHTTP(w http.ResponseWriter, req *http.Request, err error) {
statusCode := http.StatusInternalServerError
if e, ok := err.(net.Error); ok {
if e.Timeout() {
statusCode = http.StatusGatewayTimeout
} else {
statusCode = http.StatusBadGateway
}
} else if err == io.EOF {
statusCode = http.StatusBadGateway
}
w.WriteHeader(statusCode)
w.Write([]byte(http.StatusText(statusCode)))
}
type ErrorHandlerFunc func(http.ResponseWriter, *http.Request, error)
// ServeHTTP calls f(w, r).
func (f ErrorHandlerFunc) ServeHTTP(w http.ResponseWriter, r *http.Request, err error) {
f(w, r, err)
}
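
A sketch of supplying a custom error handler via ErrorHandlerFunc, here wired into the round-robin balancer (the JSON error body is an assumption):

package main

import (
	"net/http"

	"github.com/vulcand/oxy/roundrobin"
	"github.com/vulcand/oxy/utils"
)

func main() {
	// Turn a plain function into an ErrorHandler.
	onErr := utils.ErrorHandlerFunc(func(w http.ResponseWriter, r *http.Request, err error) {
		w.Header().Set("Content-Type", "application/json")
		w.WriteHeader(http.StatusBadGateway)
		w.Write([]byte(`{"error":"upstream failure"}`))
	})

	next := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {})
	lb, err := roundrobin.New(next, roundrobin.ErrorHandler(onErr))
	if err != nil {
		panic(err)
	}
	_ = lb // register servers and serve as usual
}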

86
vendor/github.com/vulcand/oxy/utils/logging.go generated vendored Normal file
View file

@ -0,0 +1,86 @@
package utils
import (
"io"
"log"
)
var NullLogger Logger = &NOPLogger{}
// Logger defines a simple logging interface
type Logger interface {
Infof(format string, args ...interface{})
Warningf(format string, args ...interface{})
Errorf(format string, args ...interface{})
}
type FileLogger struct {
info *log.Logger
warn *log.Logger
error *log.Logger
}
func NewFileLogger(w io.Writer, lvl LogLevel) *FileLogger {
l := &FileLogger{}
flag := log.Ldate | log.Ltime | log.Lmicroseconds
if lvl <= INFO {
l.info = log.New(w, "INFO: ", flag)
}
if lvl <= WARN {
l.warn = log.New(w, "WARN: ", flag)
}
if lvl <= ERROR {
l.error = log.New(w, "ERR: ", flag)
}
return l
}
func (f *FileLogger) Infof(format string, args ...interface{}) {
if f.info == nil {
return
}
f.info.Printf(format, args...)
}
func (f *FileLogger) Warningf(format string, args ...interface{}) {
if f.warn == nil {
return
}
f.warn.Printf(format, args...)
}
func (f *FileLogger) Errorf(format string, args ...interface{}) {
if f.error == nil {
return
}
f.error.Printf(format, args...)
}
type NOPLogger struct {
}
func (*NOPLogger) Infof(format string, args ...interface{}) {
}
func (*NOPLogger) Warningf(format string, args ...interface{}) {
}
func (*NOPLogger) Errorf(format string, args ...interface{}) {
}
func (*NOPLogger) Info(string) {
}
func (*NOPLogger) Warning(string) {
}
func (*NOPLogger) Error(string) {
}
type LogLevel int
const (
INFO = iota
WARN
ERROR
)
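
A sketch of the FileLogger: log INFO and above to stderr (the level and destination are arbitrary choices for the example):

package main

import (
	"os"

	"github.com/vulcand/oxy/utils"
)

func main() {
	// INFO enables all three levels; WARN or ERROR would silence the lower ones.
	log := utils.NewFileLogger(os.Stderr, utils.INFO)

	log.Infof("starting up on %s", ":8080")
	log.Warningf("%d backend configured", 1)
	log.Errorf("example error: %v", os.ErrNotExist)
}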

138
vendor/github.com/vulcand/oxy/utils/netutils.go generated vendored Normal file
View file

@ -0,0 +1,138 @@
package utils
import (
"bufio"
"io"
"mime"
"net"
"net/http"
"net/url"
)
// ProxyWriter helps to capture response headers and status code
// from the ServeHTTP. It can be safely passed to ServeHTTP handler,
// wrapping the real response writer.
type ProxyWriter struct {
W http.ResponseWriter
Code int
}
func (p *ProxyWriter) StatusCode() int {
if p.Code == 0 {
// per contract standard lib will set this to http.StatusOK if not set
// by user, here we avoid the confusion by mirroring this logic
return http.StatusOK
}
return p.Code
}
func (p *ProxyWriter) Header() http.Header {
return p.W.Header()
}
func (p *ProxyWriter) Write(buf []byte) (int, error) {
return p.W.Write(buf)
}
func (p *ProxyWriter) WriteHeader(code int) {
p.Code = code
p.W.WriteHeader(code)
}
func (p *ProxyWriter) Flush() {
if f, ok := p.W.(http.Flusher); ok {
f.Flush()
}
}
func (p *ProxyWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
return p.W.(http.Hijacker).Hijack()
}
func NewBufferWriter(w io.WriteCloser) *BufferWriter {
return &BufferWriter{
W: w,
H: make(http.Header),
}
}
type BufferWriter struct {
H http.Header
Code int
W io.WriteCloser
}
func (b *BufferWriter) Close() error {
return b.W.Close()
}
func (b *BufferWriter) Header() http.Header {
return b.H
}
func (b *BufferWriter) Write(buf []byte) (int, error) {
return b.W.Write(buf)
}
// WriteHeader sets rw.Code.
func (b *BufferWriter) WriteHeader(code int) {
b.Code = code
}
func (b *BufferWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
return b.W.(http.Hijacker).Hijack()
}
type nopWriteCloser struct {
io.Writer
}
func (*nopWriteCloser) Close() error { return nil }
// NopWriteCloser returns a WriteCloser with a no-op Close method wrapping
// the provided Writer w.
func NopWriteCloser(w io.Writer) io.WriteCloser {
return &nopWriteCloser{w}
}
// CopyURL provides an update-safe copy by avoiding a shallow copy of the User field
func CopyURL(i *url.URL) *url.URL {
out := *i
if i.User != nil {
out.User = &(*i.User)
}
return &out
}
// CopyHeaders copies HTTP headers from source to destination. It does not
// override existing values; it appends, so multiple header values are preserved
func CopyHeaders(dst, src http.Header) {
for k, vv := range src {
for _, v := range vv {
dst.Add(k, v)
}
}
}
// HasHeaders determines whether any of the header names is present in the http headers
func HasHeaders(names []string, headers http.Header) bool {
for _, h := range names {
if headers.Get(h) != "" {
return true
}
}
return false
}
// RemoveHeaders removes the header with the given names from the headers map
func RemoveHeaders(headers http.Header, names ...string) {
for _, h := range names {
headers.Del(h)
}
}
// GetHeaderMediaType parses the MIME media type value of a header.
func GetHeaderMediaType(headers http.Header, name string) (string, error) {
mediatype, _, err := mime.ParseMediaType(headers.Get(name))
return mediatype, err
}
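
A sketch using ProxyWriter and CopyHeaders from above: capture the status code written by an inner handler and copy its headers onto another header map (the inner handler is a stand-in):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/vulcand/oxy/utils"
)

func main() {
	inner := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Backend", "a")
		w.WriteHeader(http.StatusTeapot)
	})

	rec := httptest.NewRecorder()
	pw := &utils.ProxyWriter{W: rec}

	req := httptest.NewRequest(http.MethodGet, "http://example.test/", nil)
	inner.ServeHTTP(pw, req)

	// The wrapper remembers the status code the handler set.
	fmt.Println("captured status:", pw.StatusCode())

	// Copy headers without overwriting existing values.
	dst := make(http.Header)
	utils.CopyHeaders(dst, rec.Header())
	fmt.Println("copied headers:", dst)
}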

57
vendor/github.com/vulcand/oxy/utils/source.go generated vendored Normal file
View file

@ -0,0 +1,57 @@
package utils
import (
"fmt"
"net/http"
"strings"
)
// SourceExtractor extracts the source from the request; that may be the client IP, or a particular header that
// identifies the source. amount stands for the number of connections the source consumes, usually 1 for connection limiters.
// An error should be returned when the source cannot be identified
type SourceExtractor interface {
Extract(req *http.Request) (token string, amount int64, err error)
}
type ExtractorFunc func(req *http.Request) (token string, amount int64, err error)
func (f ExtractorFunc) Extract(req *http.Request) (string, int64, error) {
return f(req)
}
type ExtractSource func(req *http.Request)
func NewExtractor(variable string) (SourceExtractor, error) {
if variable == "client.ip" {
return ExtractorFunc(extractClientIP), nil
}
if variable == "request.host" {
return ExtractorFunc(extractHost), nil
}
if strings.HasPrefix(variable, "request.header.") {
header := strings.TrimPrefix(variable, "request.header.")
if len(header) == 0 {
return nil, fmt.Errorf("Wrong header: %s", header)
}
return makeHeaderExtractor(header), nil
}
return nil, fmt.Errorf("Unsupported limiting variable: '%s'", variable)
}
func extractClientIP(req *http.Request) (string, int64, error) {
vals := strings.SplitN(req.RemoteAddr, ":", 2)
if len(vals[0]) == 0 {
return "", 0, fmt.Errorf("Failed to parse client IP: %v", req.RemoteAddr)
}
return vals[0], 1, nil
}
func extractHost(req *http.Request) (string, int64, error) {
return req.Host, 1, nil
}
func makeHeaderExtractor(header string) SourceExtractor {
return ExtractorFunc(func(req *http.Request) (string, int64, error) {
return req.Header.Get(header), 1, nil
})
}
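
A sketch of the source extractor, e.g. for keying a rate limiter by a header (the header name and request are illustrative):

package main

import (
	"fmt"
	"net/http/httptest"

	"github.com/vulcand/oxy/utils"
)

func main() {
	// Key requests by the value of an assumed client-id header.
	ex, err := utils.NewExtractor("request.header.X-Client-Id")
	if err != nil {
		panic(err)
	}

	req := httptest.NewRequest("GET", "http://example.test/", nil)
	req.Header.Set("X-Client-Id", "tenant-42")

	token, amount, err := ex.Extract(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(token, amount) // "tenant-42 1"
}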

202
vendor/github.com/vulcand/predicate/LICENSE generated vendored Normal file
View file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

190
vendor/github.com/vulcand/predicate/parse.go generated vendored Normal file

@ -0,0 +1,190 @@
package predicate
import (
"fmt"
"go/ast"
"go/parser"
"go/token"
"reflect"
"strconv"
)
func NewParser(d Def) (Parser, error) {
return &predicateParser{d: d}, nil
}
type predicateParser struct {
d Def
}
func (p *predicateParser) Parse(in string) (interface{}, error) {
expr, err := parser.ParseExpr(in)
if err != nil {
return nil, err
}
return p.parseNode(expr)
}
func (p *predicateParser) parseNode(node ast.Node) (interface{}, error) {
switch n := node.(type) {
case *ast.BasicLit:
return literalToValue(n)
case *ast.BinaryExpr:
x, err := p.parseNode(n.X)
if err != nil {
return nil, err
}
y, err := p.parseNode(n.Y)
if err != nil {
return nil, err
}
return p.joinPredicates(n.Op, x, y)
case *ast.CallExpr:
// We expect a function that will return a predicate
name, err := getIdentifier(n.Fun)
if err != nil {
return nil, err
}
fn, err := p.getFunction(name)
if err != nil {
return nil, err
}
arguments, err := collectLiterals(n.Args)
if err != nil {
return nil, err
}
return callFunction(fn, arguments)
case *ast.ParenExpr:
return p.parseNode(n.X)
}
return nil, fmt.Errorf("unsupported %T", node)
}
func (p *predicateParser) getFunction(name string) (interface{}, error) {
v, ok := p.d.Functions[name]
if !ok {
return nil, fmt.Errorf("unsupported function: %s", name)
}
return v, nil
}
func (p *predicateParser) joinPredicates(op token.Token, a, b interface{}) (interface{}, error) {
joinFn, err := p.getJoinFunction(op)
if err != nil {
return nil, err
}
return callFunction(joinFn, []interface{}{a, b})
}
func (p *predicateParser) getJoinFunction(op token.Token) (interface{}, error) {
var fn interface{}
switch op {
case token.LAND:
fn = p.d.Operators.AND
case token.LOR:
fn = p.d.Operators.OR
case token.GTR:
fn = p.d.Operators.GT
case token.GEQ:
fn = p.d.Operators.GE
case token.LSS:
fn = p.d.Operators.LT
case token.LEQ:
fn = p.d.Operators.LE
case token.EQL:
fn = p.d.Operators.EQ
case token.NEQ:
fn = p.d.Operators.NEQ
}
if fn == nil {
return nil, fmt.Errorf("%v is not supported", op)
}
return fn, nil
}
func getIdentifier(node ast.Node) (string, error) {
sexpr, ok := node.(*ast.SelectorExpr)
if ok {
id, ok := sexpr.X.(*ast.Ident)
if !ok {
return "", fmt.Errorf("expected selector identifier, got: %T", sexpr.X)
}
return fmt.Sprintf("%s.%s", id.Name, sexpr.Sel.Name), nil
}
id, ok := node.(*ast.Ident)
if !ok {
return "", fmt.Errorf("expected identifier, got: %T", node)
}
return id.Name, nil
}
func collectLiterals(nodes []ast.Expr) ([]interface{}, error) {
out := make([]interface{}, len(nodes))
for i, n := range nodes {
l, ok := n.(*ast.BasicLit)
if !ok {
return nil, fmt.Errorf("expected literal, got %T", n)
}
val, err := literalToValue(l)
if err != nil {
return nil, err
}
out[i] = val
}
return out, nil
}
func literalToValue(a *ast.BasicLit) (interface{}, error) {
switch a.Kind {
case token.FLOAT:
value, err := strconv.ParseFloat(a.Value, 64)
if err != nil {
return nil, fmt.Errorf("failed to parse argument: %s, error: %s", a.Value, err)
}
return value, nil
case token.INT:
value, err := strconv.Atoi(a.Value)
if err != nil {
return nil, fmt.Errorf("failed to parse argument: %s, error: %s", a.Value, err)
}
return value, nil
case token.STRING:
value, err := strconv.Unquote(a.Value)
if err != nil {
return nil, fmt.Errorf("failed to parse argument: %s, error: %s", a.Value, err)
}
return value, nil
}
return nil, fmt.Errorf("unsupported function argument type: '%v'", a.Kind)
}
func callFunction(f interface{}, args []interface{}) (v interface{}, err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("%s", r)
}
}()
arguments := make([]reflect.Value, len(args))
for i, a := range args {
arguments[i] = reflect.ValueOf(a)
}
fn := reflect.ValueOf(f)
ret := fn.Call(arguments)
switch len(ret) {
case 1:
return ret[0].Interface(), nil
case 2:
v, e := ret[0].Interface(), ret[1].Interface()
if e == nil {
return v, nil
}
err, ok := e.(error)
if !ok {
return nil, fmt.Errorf("expected error as a second return value, got %T", e)
}
return v, err
}
return nil, fmt.Errorf("expected at least one return argument for '%v'", fn)
}

71
vendor/github.com/vulcand/predicate/predicate.go generated vendored Normal file

@ -0,0 +1,71 @@
/*
Package predicate is used to create interpreted mini-languages with Go syntax - mostly to define
various predicates for configuration, e.g. Latency() > 40 || ErrorRate() > 0.5.
Here's an example of a fully functional predicate language to deal with division remainders:
// takes number and returns true or false
type numberPredicate func(v int) bool
// Converts one number to another
type numberMapper func(v int) int
// Function that creates predicate to test if the remainder is 0
func divisibleBy(divisor int) numberPredicate {
return func(v int) bool {
return v%divisor == 0
}
}
// Function - logical operator AND that combines predicates
func numberAND(a, b numberPredicate) numberPredicate {
return func(v int) bool {
return a(v) && b(v)
}
}
p, err := NewParser(Def{
Operators: Operators{
AND: numberAND,
},
Functions: map[string]interface{}{
"DivisibleBy": divisibleBy,
},
})
pr, err := p.Parse("DivisibleBy(2) && DivisibleBy(3)")
if err != nil {
log.Fatalf("Error: %v", err)
}
pr.(numberPredicate)(2) // false
pr.(numberPredicate)(3) // false
pr.(numberPredicate)(6) // true
*/
package predicate
// Def contains supported operators (e.g. LT, GT) and functions passed in as a map.
type Def struct {
Operators Operators
// Function matching is case sensitive, e.g. Len is different from len
Functions map[string]interface{}
}
// Operators contain functions for equality and logical comparison.
type Operators struct {
EQ interface{}
NEQ interface{}
LT interface{}
GT interface{}
LE interface{}
GE interface{}
OR interface{}
AND interface{}
}
// Parser takes the string with expression and calls the operators and functions.
type Parser interface {
Parse(string) (interface{}, error)
}
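As a quick illustration of how Def, Operators and Functions fit together, here is a minimal sketch of a comparison language; the Latency metric and the gt comparator below are invented for the example, only the predicate API itself is from this package:

package main

import (
    "fmt"
    "log"

    "github.com/vulcand/predicate"
)

// latency is a stand-in metric source, invented for this sketch.
func latency() float64 { return 52.0 }

// gt compares a metric value against a literal threshold from the expression.
func gt(value float64, threshold int) bool { return value > float64(threshold) }

func main() {
    p, err := predicate.NewParser(predicate.Def{
        Operators: predicate.Operators{GT: gt},
        Functions: map[string]interface{}{"Latency": latency},
    })
    if err != nil {
        log.Fatal(err)
    }
    out, err := p.Parse("Latency() > 40")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(out.(bool)) // true, since 52.0 > 40
}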

202
vendor/github.com/vulcand/route/LICENSE generated vendored Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

88
vendor/github.com/vulcand/route/iter.go generated vendored Normal file

@ -0,0 +1,88 @@
package route
import (
"fmt"
)
// charPos stores the position in the iterator
type charPos struct {
i int
si int
}
// charIter is an iterator over a sequence of strings; it returns characters byte by byte, string by string
type charIter struct {
i int // position in the current string
si int // position in the array of strings
seq []string // sequence of strings, e.g. ["GET", "/path"]
sep []byte // every string in the sequence has an associated separator used for trie matching, e.g. path uses '/' as separator,
// so the sequence ["a.host", "/path"] has accompanying separators ['.', '/']
}
func newIter(seq []string, sep []byte) *charIter {
return &charIter{
i: 0,
si: 0,
seq: seq,
sep: sep,
}
}
func (r *charIter) level() int {
return r.si
}
func (r *charIter) String() string {
if r.isEnd() {
return "<end>"
}
return fmt.Sprintf("<%d:%v>", r.i, r.seq[r.si])
}
func (r *charIter) isEnd() bool {
return len(r.seq) == 0 || // no data at all
(r.si >= len(r.seq)-1 && r.i >= len(r.seq[r.si])) || // we are at the last char of last seq
(len(r.seq[r.si]) == 0) // empty input
}
func (r *charIter) position() charPos {
return charPos{i: r.i, si: r.si}
}
func (r *charIter) setPosition(p charPos) {
r.i = p.i
r.si = p.si
}
func (r *charIter) pushBack() {
if r.i == 0 && r.si == 0 { // this is start
return
} else if r.i == 0 && r.si != 0 { // this is start of the next string
r.si--
r.i = len(r.seq[r.si]) - 1
return
}
r.i--
}
// next returns the current byte in the sequence, the separator corresponding to that byte, and a boolean that is false once the end of the sequence has been reached
func (r *charIter) next() (byte, byte, bool) {
// we have reached the last string in the index, end
if r.isEnd() {
return 0, 0, false
}
b := r.seq[r.si][r.i]
sep := r.sep[r.si]
r.i++
// current string index exceeded the last char of the current string
// move to the next string if it's present
if r.i >= len(r.seq[r.si]) && r.si < len(r.seq)-1 {
r.si++
r.i = 0
}
return b, sep, true
}
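A package-internal sketch of how the iterator walks a host+path sequence (the function name is illustrative, e.g. something that could live in a test of this package):

func exampleIterWalk() {
    it := newIter([]string{"example.com", "/foo"}, []byte{'.', '/'})
    for {
        c, sep, ok := it.next()
        if !ok {
            break
        }
        // prints every byte of "example.com" with separator '.',
        // then every byte of "/foo" with separator '/'
        fmt.Printf("%c (sep %c)\n", c, sep)
    }
}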

186
vendor/github.com/vulcand/route/mapper.go generated vendored Normal file

@ -0,0 +1,186 @@
package route
import (
"net/http"
"strings"
)
// requestMapper maps the request to a string, e.g. maps a request to its hostname, or a request to a header value
type requestMapper interface {
// separator returns the separator that makes sense for this request, e.g. / for urls or . for domains
separator() byte
// equivalent returns the equivalent mapper if the two mappers are equivalent, e.g. map to the same sequence;
// mappers are also equivalent if one mapper is a subset of another, e.g. the combined (host, path) mapper is equivalent to the (host) mapper
equivalent(requestMapper) requestMapper
// mapRequest maps the request to a string, e.g. a request to its URL path
mapRequest(r *http.Request) string
// newIter returns an iterator instead of a string, for stream matchers
newIter(r *http.Request) *charIter
}
type methodMapper struct {
}
func (m *methodMapper) separator() byte {
return methodSep
}
func (m *methodMapper) equivalent(o requestMapper) requestMapper {
_, ok := o.(*methodMapper)
if ok {
return m
}
return nil
}
func (m *methodMapper) mapRequest(r *http.Request) string {
return r.Method
}
func (m *methodMapper) newIter(r *http.Request) *charIter {
return newIter([]string{m.mapRequest(r)}, []byte{m.separator()})
}
type pathMapper struct {
}
func (m *pathMapper) separator() byte {
return pathSep
}
func (p *pathMapper) equivalent(o requestMapper) requestMapper {
_, ok := o.(*pathMapper)
if ok {
return p
}
return nil
}
func (p *pathMapper) newIter(r *http.Request) *charIter {
return newIter([]string{p.mapRequest(r)}, []byte{p.separator()})
}
func (p *pathMapper) mapRequest(r *http.Request) string {
return rawPath(r)
}
type hostMapper struct {
}
func (p *hostMapper) equivalent(o requestMapper) requestMapper {
_, ok := o.(*hostMapper)
if ok {
return p
}
return nil
}
func (m *hostMapper) separator() byte {
return domainSep
}
func (h *hostMapper) mapRequest(r *http.Request) string {
return strings.Split(strings.ToLower(r.Host), ":")[0]
}
func (p *hostMapper) newIter(r *http.Request) *charIter {
return newIter([]string{p.mapRequest(r)}, []byte{p.separator()})
}
type headerMapper struct {
header string
}
func (h *headerMapper) equivalent(o requestMapper) requestMapper {
hm, ok := o.(*headerMapper)
if ok && hm.header == h.header {
return h
}
return nil
}
func (m *headerMapper) separator() byte {
return headerSep
}
func (h *headerMapper) mapRequest(r *http.Request) string {
return r.Header.Get(h.header)
}
func (h *headerMapper) newIter(r *http.Request) *charIter {
return newIter([]string{h.mapRequest(r)}, []byte{h.separator()})
}
type seqMapper struct {
seq []requestMapper
}
func newSeqMapper(seq ...requestMapper) *seqMapper {
var out []requestMapper
for _, s := range seq {
switch m := s.(type) {
case *seqMapper:
out = append(out, m.seq...)
default:
out = append(out, s)
}
}
return &seqMapper{seq: out}
}
func (s *seqMapper) newIter(r *http.Request) *charIter {
out := make([]string, len(s.seq))
for i := range s.seq {
out[i] = s.seq[i].mapRequest(r)
}
seps := make([]byte, len(s.seq))
for i := range s.seq {
seps[i] = s.seq[i].separator()
}
return newIter(out, seps)
}
func (s *seqMapper) mapRequest(r *http.Request) string {
out := make([]string, len(s.seq))
for i := range s.seq {
out[i] = s.seq[i].mapRequest(r)
}
return strings.Join(out, "")
}
func (s *seqMapper) separator() byte {
return s.seq[0].separator()
}
func (s *seqMapper) equivalent(o requestMapper) requestMapper {
so, ok := o.(*seqMapper)
if !ok {
return nil
}
var longer, shorter *seqMapper
if len(s.seq) > len(so.seq) {
longer = s
shorter = so
} else {
longer = so
shorter = s
}
for i := range longer.seq {
// shorter is subset of longer, return longer sequence mapper
if i >= len(shorter.seq)-1 {
return longer
}
if longer.seq[i].equivalent(shorter.seq[i]) == nil {
return nil
}
}
return longer
}
const (
pathSep = '/'
domainSep = '.'
headerSep = '/'
methodSep = ' '
)
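For instance, chaining the host and path mappers with newSeqMapper maps a request to the combined sequence used by chained tries; a package-internal sketch (the function name is illustrative):

func exampleSeqMapping(r *http.Request) string {
    m := newSeqMapper(&hostMapper{}, &pathMapper{})
    // For a request to http://Example.com:8080/v1/users this returns
    // "example.com/v1/users": the host is lower-cased and the port is stripped.
    return m.mapRequest(r)
}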

155
vendor/github.com/vulcand/route/matcher.go generated vendored Normal file

@ -0,0 +1,155 @@
package route
import (
"fmt"
"net/http"
"regexp"
"strings"
)
type matcher interface {
match(*http.Request) *match
setMatch(match *match)
canMerge(matcher) bool
merge(matcher) (matcher, error)
canChain(matcher) bool
chain(matcher) (matcher, error)
}
func hostTrieMatcher(hostname string) (matcher, error) {
return newTrieMatcher(strings.ToLower(hostname), &hostMapper{}, &match{})
}
func hostRegexpMatcher(hostname string) (matcher, error) {
return newRegexpMatcher(strings.ToLower(hostname), &hostMapper{}, &match{})
}
func methodTrieMatcher(method string) (matcher, error) {
return newTrieMatcher(method, &methodMapper{}, &match{})
}
func methodRegexpMatcher(method string) (matcher, error) {
return newRegexpMatcher(method, &methodMapper{}, &match{})
}
func pathTrieMatcher(path string) (matcher, error) {
return newTrieMatcher(path, &pathMapper{}, &match{})
}
func pathRegexpMatcher(path string) (matcher, error) {
return newRegexpMatcher(path, &pathMapper{}, &match{})
}
func headerTrieMatcher(name, value string) (matcher, error) {
return newTrieMatcher(value, &headerMapper{header: name}, &match{})
}
func headerRegexpMatcher(name, value string) (matcher, error) {
return newRegexpMatcher(value, &headerMapper{header: name}, &match{})
}
type match struct {
val interface{}
}
type andMatcher struct {
a matcher
b matcher
}
func newAndMatcher(a, b matcher) matcher {
if a.canChain(b) {
m, err := a.chain(b)
if err == nil {
return m
}
}
return &andMatcher{
a: a, b: b,
}
}
func (a *andMatcher) canChain(matcher) bool {
return false
}
func (a *andMatcher) chain(matcher) (matcher, error) {
return nil, fmt.Errorf("not supported")
}
func (a *andMatcher) String() string {
return fmt.Sprintf("andMatcher(%v, %v)", a.a, a.b)
}
func (a *andMatcher) setMatch(m *match) {
a.a.setMatch(m)
a.b.setMatch(m)
}
func (a *andMatcher) canMerge(o matcher) bool {
return false
}
func (a *andMatcher) merge(o matcher) (matcher, error) {
return nil, fmt.Errorf("Method not supported")
}
func (a *andMatcher) match(req *http.Request) *match {
result := a.a.match(req)
if result == nil {
return nil
}
return a.b.match(req)
}
// Regular expression matcher, takes a regular expression and requestMapper
type regexpMatcher struct {
// Uses this mapper to extract a string from a request to match against
mapper requestMapper
// Compiled regular expression
expr *regexp.Regexp
// match result
result *match
}
func (r *regexpMatcher) canChain(matcher) bool {
return false
}
func (r *regexpMatcher) chain(matcher) (matcher, error) {
return nil, fmt.Errorf("not supported")
}
func (m *regexpMatcher) String() string {
return fmt.Sprintf("regexpMatcher(%v)", m.expr)
}
func (m *regexpMatcher) setMatch(result *match) {
m.result = result
}
func newRegexpMatcher(expr string, mapper requestMapper, m *match) (matcher, error) {
r, err := regexp.Compile(expr)
if err != nil {
return nil, fmt.Errorf("Bad regular expression: %s %s", expr, err)
}
return &regexpMatcher{expr: r, mapper: mapper, result: m}, nil
}
func (m *regexpMatcher) canMerge(matcher) bool {
return false
}
func (m *regexpMatcher) merge(matcher) (matcher, error) {
return nil, fmt.Errorf("Method not supported")
}
func (m *regexpMatcher) match(req *http.Request) *match {
if m.expr.MatchString(m.mapper.mapRequest(req)) {
return m.result
}
return nil
}
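As a small package-internal sketch of the helpers above, a header regexp matcher can be built and applied to a request like this (the function name is illustrative):

func exampleHeaderMatch(r *http.Request) bool {
    // Matches requests whose Content-Type is any JSON media type.
    m, err := headerRegexpMatcher("Content-Type", "application/.*json")
    if err != nil {
        return false
    }
    return m.match(r) != nil
}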

74
vendor/github.com/vulcand/route/mux.go generated vendored Normal file

@ -0,0 +1,74 @@
package route
import (
"fmt"
"net/http"
)
// Mux implements a router compatible with http.Handler
type Mux struct {
// notFound is the handler invoked for requests that match no route
notFound http.Handler
router Router
}
// NewMux returns a new Mux router
func NewMux() *Mux {
return &Mux{
router: New(),
notFound: &notFound{},
}
}
// Handle adds http handler for route expression
func (m *Mux) Handle(expr string, handler http.Handler) error {
return m.router.UpsertRoute(expr, handler)
}
// HandleFunc adds an http handler function for the route expression
func (m *Mux) HandleFunc(expr string, handler func(http.ResponseWriter, *http.Request)) error {
return m.Handle(expr, http.HandlerFunc(handler))
}
func (m *Mux) Remove(expr string) error {
return m.router.RemoveRoute(expr)
}
// ServeHTTP routes the request and passes it to handler
func (m *Mux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
h, err := m.router.Route(r)
if err != nil || h == nil {
m.notFound.ServeHTTP(w, r)
return
}
h.(http.Handler).ServeHTTP(w, r)
}
func (m *Mux) SetNotFound(n http.Handler) error {
if n == nil {
return fmt.Errorf("Not Found handler cannot be nil. Operation rejected.")
}
m.notFound = n
return nil
}
func (m *Mux) GetNotFound() http.Handler {
return m.notFound
}
func (m *Mux) IsValid(expr string) bool {
return IsValid(expr)
}
// notFound is a generic http.Handler that replies 404 Not Found to any request
type notFound struct {
}
// ServeHTTP returns a simple 404 Not found response
func (notFound) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusNotFound)
fmt.Fprint(w, "Not found")
}
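A typical way to use Mux from another package, assuming the route expression language documented in router.go (the address and handler below are illustrative):

package main

import (
    "fmt"
    "net/http"

    "github.com/vulcand/route"
)

func main() {
    mux := route.NewMux()
    err := mux.HandleFunc(`Host("localhost") && Path("/v1/users")`, func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, "users")
    })
    if err != nil {
        panic(err)
    }
    // Unmatched requests get the built-in 404 notFound handler.
    http.ListenAndServe("localhost:8080", mux)
}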

47
vendor/github.com/vulcand/route/parse.go generated vendored Normal file

@ -0,0 +1,47 @@
package route
import (
"fmt"
"github.com/vulcand/predicate"
)
// IsValid checks whether expression is valid
func IsValid(expr string) bool {
_, err := parse(expr, &match{})
return err == nil
}
func parse(expression string, result *match) (matcher, error) {
p, err := predicate.NewParser(predicate.Def{
Functions: map[string]interface{}{
"Host": hostTrieMatcher,
"HostRegexp": hostRegexpMatcher,
"Path": pathTrieMatcher,
"PathRegexp": pathRegexpMatcher,
"Method": methodTrieMatcher,
"MethodRegexp": methodRegexpMatcher,
"Header": headerTrieMatcher,
"HeaderRegexp": headerRegexpMatcher,
},
Operators: predicate.Operators{
AND: newAndMatcher,
},
})
if err != nil {
return nil, err
}
out, err := p.Parse(expression)
if err != nil {
return nil, err
}
m, ok := out.(matcher)
if !ok {
return nil, fmt.Errorf("unknown result type: %T", out)
}
m.setMatch(result)
return m, nil
}

195
vendor/github.com/vulcand/route/router.go generated vendored Normal file

@ -0,0 +1,195 @@
/*
Package route provides an http package-compatible routing library. It can route http requests by hostname, method, path and headers.
Route defines a simple language for matching requests based on Go syntax. Route provides a series of matchers that follow the syntax:
Matcher("value") // matches value using trie
Matcher("<string>.value") // uses trie-based matching for a.value and b.value
MatcherRegexp(".*value") // uses regexp-based matching
Host matcher:
Host("<subdomain>.localhost") // trie-based matcher for a.localhost, b.localhost, etc.
HostRegexp(".*localhost") // regexp based matcher
Path matcher:
Path("/hello/<value>") // trie-based matcher for raw request path
PathRegexp("/hello/.*") // regexp-based matcher for raw request path
Method matcher:
Method("GET") // trie-based matcher for request method
MethodRegexp("POST|PUT") // regexp based matcher for request method
Header matcher:
Header("Content-Type", "application/<subtype>") // trie-based matcher for headers
HeaderRegexp("Content-Type", "application/.*") // regexp based matcher for headers
Matchers can be combined using && operator:
Host("localhost") && Method("POST") && Path("/v1")
Route library will join the trie-based matchers into one trie matcher when possible, for example:
Host("localhost") && Method("POST") && Path("/v1")
Host("localhost") && Method("GET") && Path("/v2")
Will be combined into one trie for performance. If you add a third route:
Host("localhost") && Method("GET") && PathRegexp("/v2/.*")
It won't be joined into the trie, and will be matched separately instead.
*/
package route
import (
"fmt"
"net/http"
"sort"
"sync"
)
// Router implements http request routing and operations. It is a generic router not conforming to the http.Handler interface; to get a handler
// conforming to http.Handler, use the Mux router instead.
type Router interface {
// GetRoute returns a route by a given expression; returns nil if the expression is not found
GetRoute(string) interface{}
// AddRoute adds a route to match by expression; returns an error if the expression is already defined or the route expression is incorrect
AddRoute(string, interface{}) error
// RemoveRoute removes a route for a given expression
RemoveRoute(string) error
// UpsertRoute updates an existing route or adds a new route by given expression
UpsertRoute(string, interface{}) error
// Route takes a request and matches it against the registered routes; returns the matched route if found, nil if there is no matching route, or an error in case of an internal error.
Route(*http.Request) (interface{}, error)
}
type router struct {
mutex *sync.RWMutex
matchers []matcher
routes map[string]*match
}
// New creates a new Router instance
func New() Router {
return &router{
mutex: &sync.RWMutex{},
routes: make(map[string]*match),
}
}
func (e *router) GetRoute(expr string) interface{} {
e.mutex.RLock()
defer e.mutex.RUnlock()
res, ok := e.routes[expr]
if ok {
return res.val
}
return nil
}
func (e *router) AddRoute(expr string, val interface{}) error {
e.mutex.Lock()
defer e.mutex.Unlock()
if _, ok := e.routes[expr]; ok {
return fmt.Errorf("Expression '%s' already exists", expr)
}
result := &match{val: val}
if _, err := parse(expr, result); err != nil {
return err
}
e.routes[expr] = result
if err := e.compile(); err != nil {
delete(e.routes, expr)
return err
}
return nil
}
func (e *router) UpsertRoute(expr string, val interface{}) error {
e.mutex.Lock()
defer e.mutex.Unlock()
result := &match{val: val}
if _, err := parse(expr, result); err != nil {
return err
}
prev, existed := e.routes[expr]
e.routes[expr] = result
if err := e.compile(); err != nil {
if existed {
e.routes[expr] = prev
} else {
delete(e.routes, expr)
}
return err
}
return nil
}
func (e *router) compile() error {
var exprs = []string{}
for expr := range e.routes {
exprs = append(exprs, expr)
}
sort.Sort(sort.Reverse(sort.StringSlice(exprs)))
matchers := []matcher{}
i := 0
for _, expr := range exprs {
result := e.routes[expr]
matcher, err := parse(expr, result)
if err != nil {
return err
}
// Merge the previous and new matcher if that's possible
if i > 0 && matchers[i-1].canMerge(matcher) {
m, err := matchers[i-1].merge(matcher)
if err != nil {
return err
}
matchers[i-1] = m
} else {
matchers = append(matchers, matcher)
i++
}
}
e.matchers = matchers
return nil
}
func (e *router) RemoveRoute(expr string) error {
e.mutex.Lock()
defer e.mutex.Unlock()
delete(e.routes, expr)
return e.compile()
}
func (e *router) Route(req *http.Request) (interface{}, error) {
e.mutex.RLock()
defer e.mutex.RUnlock()
if len(e.matchers) == 0 {
return nil, nil
}
for _, m := range e.matchers {
if l := m.match(req); l != nil {
return l.val, nil
}
}
return nil, nil
}
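Because Router stores arbitrary values rather than http.Handler, it can also be used directly; a minimal sketch (the expression and payload are illustrative):

package main

import (
    "fmt"
    "net/http"

    "github.com/vulcand/route"
)

func main() {
    r := route.New()
    // Any value can be attached to an expression, not only an http.Handler.
    if err := r.AddRoute(`Path("/v1/users")`, "users-backend"); err != nil {
        panic(err)
    }
    req, _ := http.NewRequest("GET", "http://localhost/v1/users", nil)
    val, err := r.Route(req)
    fmt.Println(val, err) // users-backend <nil>
}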

476
vendor/github.com/vulcand/route/trie.go generated vendored Normal file

@ -0,0 +1,476 @@
package route
import (
"bytes"
"fmt"
"net/http"
"regexp"
"strings"
"unicode"
)
// Regular expression to match url parameters
var reParam *regexp.Regexp
func init() {
reParam = regexp.MustCompile("^<([^>]+)>")
}
// trie is a trie (http://en.wikipedia.org/wiki/Trie) for url matching with support for named parameters
type trie struct {
root *trieNode
// mapper takes the request and returns sequence that can be matched
mapper requestMapper
}
func (t *trie) canChain(o matcher) bool {
_, ok := o.(*trie)
return ok
}
func (t *trie) chain(o matcher) (matcher, error) {
to, ok := o.(*trie)
if !ok {
return nil, fmt.Errorf("can chain only with other trie")
}
m := t.root.findMatchNode()
m.matches = nil
m.children = []*trieNode{to.root}
t.root.setLevel(-1)
return &trie{
root: t.root,
mapper: newSeqMapper(t.mapper, to.mapper),
}, nil
}
func (t *trie) String() string {
return fmt.Sprintf("trieMatcher()")
}
// newTrieMatcher takes the url expression and the match result that corresponds to this expression and returns a parsed trie
func newTrieMatcher(expression string, mapper requestMapper, result *match) (*trie, error) {
t := &trie{
mapper: mapper,
}
t.root = &trieNode{trie: t}
if len(expression) == 0 {
return nil, fmt.Errorf("Empty URL expression")
}
err := t.root.parseExpression(-1, expression, result)
if err != nil {
return nil, err
}
return t, nil
}
func (t *trie) setMatch(result *match) {
t.root.setMatch(result)
}
// Tries can merge with other tries
func (t *trie) canMerge(m matcher) bool {
ot, ok := m.(*trie)
return ok && t.mapper.equivalent(ot.mapper) != nil
}
// merge takes the other trie and modifies itself to match the passed trie as well.
// Note that the trie passed as a parameter can only be a simple trie without multiple branches per node, e.g. a->b->c.
// The trie on the left is the "accumulating" trie that grows.
func (p *trie) merge(m matcher) (matcher, error) {
other, ok := m.(*trie)
if !ok {
return nil, fmt.Errorf("Can't merge %T and %T", p, m)
}
mapper := p.mapper.equivalent(other.mapper)
if mapper == nil {
return nil, fmt.Errorf("Can't merge %T and %T", p, m)
}
root, err := p.root.merge(other.root)
if err != nil {
return nil, err
}
return &trie{root: root, mapper: mapper}, nil
}
// match takes the request and returns the match if the request matches any of the trie's paths;
// returns nil if none of them match
func (p *trie) match(r *http.Request) *match {
if p.root == nil {
return nil
}
return p.root.match(p.mapper.newIter(r))
}
type trieNode struct {
trie *trie
// Matching character; can be empty if it's a root node
// or a node with a pattern matcher
char byte
// Optional children of this node, can be empty if it's a leaf node
children []*trieNode
// If present, means that this node is a pattern matcher
patternMatcher patternMatcher
// If present, it means this node contains a potential match for a request, and this is a leaf node.
matches []*match
// For chained tries matching different parts of the request, the level increases for the nodes of each next chained trie
level int
}
func (e *trieNode) setMatch(m *match) {
n := e.findMatchNode()
n.matches = []*match{m}
}
func (e *trieNode) setLevel(level int) {
if e.isRoot() {
level++
}
e.level = level
if len(e.matches) != 0 {
return
}
// Check for the match in child nodes
for _, c := range e.children {
c.setLevel(level)
}
}
func (e *trieNode) findMatchNode() *trieNode {
if len(e.matches) != 0 {
return e
}
// Check for the match in child nodes
for _, c := range e.children {
if n := c.findMatchNode(); n != nil {
return n
}
}
return nil
}
func (e *trieNode) isMatching() bool {
return len(e.matches) != 0
}
func (e *trieNode) isRoot() bool {
return e.char == byte(0) && e.patternMatcher == nil
}
func (e *trieNode) isPatternMatcher() bool {
return e.patternMatcher != nil
}
func (e *trieNode) isCharMatcher() bool {
return e.char != 0
}
func (e *trieNode) String() string {
self := ""
if e.patternMatcher != nil {
self = e.patternMatcher.String()
} else {
self = fmt.Sprintf("%c", e.char)
}
if e.isMatching() {
return fmt.Sprintf("match(%d:%s)", e.level, self)
} else if e.isRoot() {
return fmt.Sprintf("root(%d)", e.level)
} else {
return fmt.Sprintf("node(%d:%s)", e.level, self)
}
}
func (e *trieNode) equals(o *trieNode) bool {
return (e.level == o.level) && // we can merge nodes that are on the same level to avoid merges for different subtrie parts
(e.char == o.char) && // chars are equal
(e.patternMatcher == nil && o.patternMatcher == nil) || // both nodes have no matchers
((e.patternMatcher != nil && o.patternMatcher != nil) && e.patternMatcher.equals(o.patternMatcher)) // both nodes have equal matchers
}
func (e *trieNode) merge(o *trieNode) (*trieNode, error) {
children := make([]*trieNode, 0, len(e.children))
merged := make(map[*trieNode]bool)
// First, find the nodes with similar keys and merge them
for _, c := range e.children {
for _, c2 := range o.children {
// The nodes are equivalent, so we can merge them
if c.equals(c2) {
m, err := c.merge(c2)
if err != nil {
return nil, err
}
merged[c] = true
merged[c2] = true
children = append(children, m)
}
}
}
// Next, append the keys that haven't been merged
for _, c := range e.children {
if !merged[c] {
children = append(children, c)
}
}
for _, c := range o.children {
if !merged[c] {
children = append(children, c)
}
}
return &trieNode{
level: e.level,
trie: e.trie,
char: e.char,
children: children,
patternMatcher: e.patternMatcher,
matches: append(e.matches, o.matches...),
}, nil
}
func (p *trieNode) parseExpression(offset int, pattern string, m *match) error {
// We are the last element, so we are the matching node
if offset >= len(pattern)-1 {
p.matches = []*match{m}
return nil
}
// There's a next character that exists
patternMatcher, newOffset, err := parsePatternMatcher(offset+1, pattern)
// We have found the matcher, but the syntax or parameters are wrong
if err != nil {
return err
}
// Matcher was found
if patternMatcher != nil {
node := &trieNode{patternMatcher: patternMatcher, trie: p.trie}
p.children = []*trieNode{node}
return node.parseExpression(newOffset-1, pattern, m)
} else {
// Matcher was not found, next node is just a character
node := &trieNode{char: pattern[offset+1], trie: p.trie}
p.children = []*trieNode{node}
return node.parseExpression(offset+1, pattern, m)
}
}
func parsePatternMatcher(offset int, pattern string) (patternMatcher, int, error) {
if pattern[offset] != '<' {
return nil, -1, nil
}
rest := pattern[offset:]
match := reParam.FindStringSubmatchIndex(rest)
if len(match) == 0 {
return nil, -1, nil
}
// Split parsed matcher parameters separated by :
values := strings.Split(rest[match[2]:match[3]], ":")
// The common syntax is <matcherType:matcherArg1:matcherArg2>
matcherType := values[0]
matcherArgs := values[1:]
// If there's only one value, <param> is implicitly converted to <string:param>
if len(values) == 1 {
matcherType = "string"
matcherArgs = values
}
matcher, err := makeMatcher(matcherType, matcherArgs)
if err != nil {
return nil, offset, err
}
return matcher, offset + match[1], nil
}
type matchResult struct {
matcher patternMatcher
value interface{}
}
type patternMatcher interface {
getName() string
match(i *charIter) bool
equals(other patternMatcher) bool
String() string
}
func makeMatcher(matcherType string, matcherArgs []string) (patternMatcher, error) {
switch matcherType {
case "string":
return newStringMatcher(matcherArgs)
case "int":
return newIntMatcher(matcherArgs)
}
return nil, fmt.Errorf("unsupported matcher: %s", matcherType)
}
func newStringMatcher(args []string) (patternMatcher, error) {
if len(args) != 1 {
return nil, fmt.Errorf("expected only one parameter - variable name, got: %s", args)
}
return &stringMatcher{name: args[0]}, nil
}
type stringMatcher struct {
name string
}
func (s *stringMatcher) String() string {
return fmt.Sprintf("<string:%s>", s.name)
}
func (s *stringMatcher) getName() string {
return s.name
}
func (s *stringMatcher) match(i *charIter) bool {
s.grabValue(i)
return true
}
func (s *stringMatcher) equals(other patternMatcher) bool {
_, ok := other.(*stringMatcher)
return ok && other.getName() == s.getName()
}
func (s *stringMatcher) grabValue(i *charIter) {
for {
c, sep, ok := i.next()
if !ok {
return
}
if c == sep {
i.pushBack()
return
}
}
}
func newIntMatcher(args []string) (patternMatcher, error) {
if len(args) != 1 {
return nil, fmt.Errorf("expected only one parameter - variable name, got: %s", args)
}
return &intMatcher{name: args[0]}, nil
}
type intMatcher struct {
name string
}
func (s *intMatcher) String() string {
return fmt.Sprintf("<int:%s>", s.name)
}
func (s *intMatcher) getName() string {
return s.name
}
func (s *intMatcher) match(iter *charIter) bool {
// count stores the number of consumed characters so we know how many
// push backs to do in case there is no match
var count int
for {
c, sep, ok := iter.next()
count++
// if the current character is not a number:
// - it's either a separator that means it's a match
// - it's some other character that means it's not a match
if !unicode.IsDigit(rune(c)) {
if c == sep {
iter.pushBack()
return true
} else {
for i := 0; i < count; i++ {
iter.pushBack()
}
return false
}
}
// if it's the end of the string, it's a match
if !ok {
return true
}
}
}
func (s *intMatcher) equals(other patternMatcher) bool {
_, ok := other.(*intMatcher)
return ok && other.getName() == s.getName()
}
func (e *trieNode) matchNode(i *charIter) bool {
if i.level() != e.level {
return false
}
if e.isRoot() {
return true
}
if e.isPatternMatcher() {
return e.patternMatcher.match(i)
}
c, _, ok := i.next()
if !ok {
// we have reached the end
return false
}
if c != e.char {
// no match, so don't consume the character
i.pushBack()
return false
}
return true
}
func (e *trieNode) match(i *charIter) *match {
if !e.matchNode(i) {
return nil
}
// This is a leaf node and we are at the last character of the pattern
if len(e.matches) != 0 && i.isEnd() {
return e.matches[0]
}
// Check for the match in child nodes
for _, c := range e.children {
p := i.position()
if match := c.match(i); match != nil {
return match
}
i.setPosition(p)
}
// Child nodes did not match and we are at the boundary
if len(e.matches) != 0 && i.level() > e.level {
return e.matches[0]
}
return nil
}
// printTrie is useful for debugging and test purposes; it outputs the formatted
// representation of the trie
func printTrie(t *trie) string {
return printTrieNode(t.root)
}
func printTrieNode(e *trieNode) string {
out := &bytes.Buffer{}
printTrieNodeInner(out, e, 0)
return out.String()
}
func printTrieNodeInner(b *bytes.Buffer, e *trieNode, offset int) {
if offset == 0 {
fmt.Fprintf(b, "\n")
}
padding := strings.Repeat(" ", offset)
fmt.Fprintf(b, "%s%s\n", padding, e.String())
if len(e.children) != 0 {
for _, c := range e.children {
printTrieNodeInner(b, c, offset+1)
}
}
}
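A package-internal sketch of the pattern matcher syntax in action: a bare <name> parameter is implicitly <string:name> and captures a single trailing segment (the function name is illustrative):

func examplePatternTrie() {
    m, _ := pathTrieMatcher("/hello/<name>")
    req1, _ := http.NewRequest("GET", "http://localhost/hello/world", nil)
    req2, _ := http.NewRequest("GET", "http://localhost/hello/a/b", nil)
    // The parameter matches exactly one path segment.
    fmt.Println(m.match(req1) != nil, m.match(req2) != nil) // true false
}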

33
vendor/github.com/vulcand/route/utils.go generated vendored Normal file

@ -0,0 +1,33 @@
package route
import (
"net/http"
"strings"
)
// rawPath returns the escaped url path section
func rawPath(r *http.Request) string {
// If there are no escape symbols, don't extract raw path
if !strings.ContainsRune(r.RequestURI, '%') {
if len(r.URL.Path) == 0 {
return "/"
}
return r.URL.Path
}
path := r.RequestURI
if path == "" {
path = "/"
}
// This is an absolute URI, strip the scheme and host
if strings.Contains(path, "://") {
vals := strings.SplitN(path, r.URL.Host, 2)
if len(vals) == 2 {
path = vals[1]
}
}
idx := strings.IndexRune(path, '?')
if idx == -1 {
return path
}
return path[:idx]
}
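A package-internal sketch of what rawPath preserves compared to r.URL.Path (the function name is illustrative; RequestURI is set here the way a server would see it on the request line):

func exampleRawPath() string {
    req, _ := http.NewRequest("GET", "http://localhost/a%20b/c?x=1", nil)
    req.RequestURI = "/a%20b/c?x=1"
    // Returns "/a%20b/c": percent-escapes are kept and the query string is dropped.
    return rawPath(req)
}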

201
vendor/github.com/vulcand/vulcand/LICENSE generated vendored Normal file

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1,13 @@
package conntracker
import (
"net"
"net/http"
)
type ConnectionTracker interface {
RegisterStateChange(conn net.Conn, prev http.ConnState, cur http.ConnState)
Counts() ConnectionStats
}
type ConnectionStats map[http.ConnState]map[string]int64
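A minimal in-memory implementation of ConnectionTracker might look like the sketch below (illustrative, not part of vulcand; it additionally assumes a sync import):

type memTracker struct {
    mu     sync.Mutex
    counts ConnectionStats
}

func newMemTracker() *memTracker {
    return &memTracker{counts: make(ConnectionStats)}
}

// RegisterStateChange counts transitions into each state, keyed by the local listener address.
func (t *memTracker) RegisterStateChange(conn net.Conn, prev http.ConnState, cur http.ConnState) {
    t.mu.Lock()
    defer t.mu.Unlock()
    if t.counts[cur] == nil {
        t.counts[cur] = make(map[string]int64)
    }
    t.counts[cur][conn.LocalAddr().String()]++
}

func (t *memTracker) Counts() ConnectionStats {
    t.mu.Lock()
    defer t.mu.Unlock()
    return t.counts
}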

20
vendor/github.com/vulcand/vulcand/main.go generated vendored Normal file

@ -0,0 +1,20 @@
package main
import (
"fmt"
"os"
"runtime"
"github.com/vulcand/vulcand/plugin/registry"
"github.com/vulcand/vulcand/service"
)
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
if err := service.Run(registry.GetRegistry()); err != nil {
fmt.Printf("Service exited with error: %s\n", err)
os.Exit(255)
} else {
fmt.Println("Service exited gracefully")
}
}

146
vendor/github.com/vulcand/vulcand/plugin/middleware.go generated vendored Normal file

@ -0,0 +1,146 @@
package plugin
import (
"encoding/json"
"fmt"
"github.com/codegangsta/cli"
"github.com/vulcand/route"
"github.com/vulcand/vulcand/conntracker"
"github.com/vulcand/vulcand/router"
"net/http"
"reflect"
)
// MiddlewareSpec is a middleware specification, used to construct new middlewares and plug them into the CLI, API and backends
type MiddlewareSpec struct {
Type string
// Reader function that returns a middleware from another middleware structure
FromOther interface{}
// Flags for CLI tool to generate interface
CliFlags []cli.Flag
// Function that constructs a middleware from CLI parameters
FromCli CliReader
}
func (ms *MiddlewareSpec) FromJSON(data []byte) (Middleware, error) {
// Get a function's type
fnType := reflect.TypeOf(ms.FromOther)
// Create a pointer to the function's first argument
ptr := reflect.New(fnType.In(0)).Interface()
err := json.Unmarshal(data, &ptr)
if err != nil {
return nil, fmt.Errorf("failed to decode %T from JSON, error: %s", ptr, err)
}
// Now let's call the function to produce a middleware
fnVal := reflect.ValueOf(ms.FromOther)
results := fnVal.Call([]reflect.Value{reflect.ValueOf(ptr).Elem()})
m, out := results[0].Interface(), results[1].Interface()
if out != nil {
return nil, out.(error)
}
return m.(Middleware), nil
}
type Middleware interface {
NewHandler(http.Handler) (http.Handler, error)
}
// Reader constructs the middleware from the CLI interface
type CliReader func(c *cli.Context) (Middleware, error)
// Function that returns a middleware spec by its type
type SpecGetter func(string) *MiddlewareSpec
// Registry contains the currently registered middlewares and is used to support pluggable middlewares across all modules of vulcand
type Registry struct {
specs []*MiddlewareSpec
notFound Middleware
router router.Router
connTracker conntracker.ConnectionTracker
}
func NewRegistry() *Registry {
return &Registry{
specs: []*MiddlewareSpec{},
router: route.NewMux(),
}
}
func (r *Registry) AddSpec(s *MiddlewareSpec) error {
if s == nil {
return fmt.Errorf("spec can not be nil")
}
if r.GetSpec(s.Type) != nil {
return fmt.Errorf("middleware of type %s already registered", s.Type)
}
if err := verifySignature(s.FromOther); err != nil {
return err
}
r.specs = append(r.specs, s)
return nil
}
func (r *Registry) GetSpec(middlewareType string) *MiddlewareSpec {
for _, s := range r.specs {
if s.Type == middlewareType {
return s
}
}
return nil
}
func (r *Registry) GetSpecs() []*MiddlewareSpec {
return r.specs
}
func (r *Registry) AddNotFoundMiddleware(notFound Middleware) error {
r.notFound = notFound
return nil
}
func (r *Registry) GetNotFoundMiddleware() Middleware {
return r.notFound
}
func (r *Registry) SetRouter(router router.Router) error {
r.router = router
return nil
}
func (r *Registry) GetRouter() router.Router {
return r.router
}
func (r *Registry) SetConnectionTracker(connTracker conntracker.ConnectionTracker) error {
r.connTracker = connTracker
return nil
}
func (r *Registry) GetConnectionTracker() conntracker.ConnectionTracker {
return r.connTracker
}
func verifySignature(fn interface{}) error {
t := reflect.TypeOf(fn)
if t == nil || t.Kind() != reflect.Func {
return fmt.Errorf("expected function, got %s", t)
}
if t.NumIn() != 1 {
return fmt.Errorf("expected function with one input argument, got %d", t.NumIn())
}
if t.In(0).Kind() != reflect.Struct {
return fmt.Errorf("function argument should be struct, got %s", t.In(0).Kind())
}
if t.NumOut() != 2 {
return fmt.Errorf("function should return 2 values, got %d", t.NumOut())
}
if !t.Out(0).AssignableTo(reflect.TypeOf((*Middleware)(nil)).Elem()) {
return fmt.Errorf("function first return value should be Middleware got, %s", t.Out(0))
}
if !t.Out(1).AssignableTo(reflect.TypeOf((*error)(nil)).Elem()) {
return fmt.Errorf("function second return value should be error got, %s", t.Out(1))
}
return nil
}
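The contract enforced by verifySignature (one struct argument in, (Middleware, error) out) is easiest to see with a concrete spec. A hedged sketch follows, not part of the vendored code; the noop middleware and all of its names are invented for illustration:
package main
import (
"fmt"
"net/http"
"github.com/vulcand/vulcand/plugin"
)
// Noop is a hypothetical middleware configuration struct (the "other" side of FromOther).
type Noop struct{}
// NewHandler satisfies plugin.Middleware by returning the next handler unchanged.
func (n *Noop) NewHandler(next http.Handler) (http.Handler, error) {
return next, nil
}
// FromOther matches the shape verifySignature checks for: struct in, (Middleware, error) out.
func FromOther(n Noop) (plugin.Middleware, error) {
return &n, nil
}
func main() {
r := plugin.NewRegistry()
if err := r.AddSpec(&plugin.MiddlewareSpec{Type: "noop", FromOther: FromOther}); err != nil {
panic(err)
}
// FromJSON decodes the JSON into a Noop value and then calls FromOther with it.
m, err := r.GetSpec("noop").FromJSON([]byte(`{}`))
if err != nil {
panic(err)
}
fmt.Printf("built middleware: %T\n", m)
}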


@@ -0,0 +1,206 @@
package rewrite
import (
"bytes"
"fmt"
"io"
"net/http"
"net/url"
"regexp"
"strconv"
"strings"
log "github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/vulcand/oxy/utils"
"github.com/vulcand/vulcand/plugin"
)
const Type = "rewrite"
type Rewrite struct {
Regexp string
Replacement string
RewriteBody bool
Redirect bool
}
func NewRewrite(regex, replacement string, rewriteBody, redirect bool) (*Rewrite, error) {
return &Rewrite{regex, replacement, rewriteBody, redirect}, nil
}
func (rw *Rewrite) NewHandler(next http.Handler) (http.Handler, error) {
return newRewriteHandler(next, rw)
}
func (rw *Rewrite) String() string {
return fmt.Sprintf("regexp=%v, replacement=%v, rewriteBody=%v, redirect=%v",
rw.Regexp, rw.Replacement, rw.RewriteBody, rw.Redirect)
}
type rewriteHandler struct {
next http.Handler
errHandler utils.ErrorHandler
regexp *regexp.Regexp
replacement string
rewriteBody bool
redirect bool
}
func newRewriteHandler(next http.Handler, spec *Rewrite) (*rewriteHandler, error) {
re, err := regexp.Compile(spec.Regexp)
if err != nil {
return nil, err
}
return &rewriteHandler{
regexp: re,
replacement: spec.Replacement,
rewriteBody: spec.RewriteBody,
redirect: spec.Redirect,
next: next,
errHandler: utils.DefaultHandler,
}, nil
}
func (rw *rewriteHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
oldURL := rawURL(req)
// only continue if the Regexp param matches the URL
if !rw.regexp.MatchString(oldURL) {
rw.next.ServeHTTP(w, req)
return
}
// apply a rewrite regexp to the URL
newURL := rw.regexp.ReplaceAllString(oldURL, rw.replacement)
// replace any variables that may be in there
rewrittenURL := &bytes.Buffer{}
if err := ApplyString(newURL, rewrittenURL, req); err != nil {
rw.errHandler.ServeHTTP(w, req, err)
return
}
// parse the rewritten URL and replace request URL with it
parsedURL, err := url.Parse(rewrittenURL.String())
if err != nil {
rw.errHandler.ServeHTTP(w, req, err)
return
}
if rw.redirect && newURL != oldURL {
(&redirectHandler{u: parsedURL}).ServeHTTP(w, req)
return
}
req.URL = parsedURL
// make sure the request URI corresponds to the rewritten URL
req.RequestURI = req.URL.RequestURI()
if !rw.rewriteBody {
rw.next.ServeHTTP(w, req)
return
}
bw := &bufferWriter{header: make(http.Header), buffer: &bytes.Buffer{}}
newBody := &bytes.Buffer{}
rw.next.ServeHTTP(bw, req)
if err := Apply(bw.buffer, newBody, req); err != nil {
log.Errorf("Failed to rewrite response body: %v", err)
return
}
utils.CopyHeaders(w.Header(), bw.Header())
w.Header().Set("Content-Length", strconv.Itoa(newBody.Len()))
w.WriteHeader(bw.code)
io.Copy(w, newBody)
}
func FromOther(rw Rewrite) (plugin.Middleware, error) {
return NewRewrite(rw.Regexp, rw.Replacement, rw.RewriteBody, rw.Redirect)
}
func FromCli(c *cli.Context) (plugin.Middleware, error) {
return NewRewrite(c.String("regexp"), c.String("replacement"), c.Bool("rewriteBody"), c.Bool("redirect"))
}
func GetSpec() *plugin.MiddlewareSpec {
return &plugin.MiddlewareSpec{
Type: Type,
FromOther: FromOther,
FromCli: FromCli,
CliFlags: CliFlags(),
}
}
func CliFlags() []cli.Flag {
return []cli.Flag{
cli.StringFlag{
Name: "regexp",
Usage: "regex to match against http request path",
},
cli.StringFlag{
Name: "replacement",
Usage: "replacement text into which regex expansions are inserted",
},
cli.BoolFlag{
Name: "rewriteBody",
Usage: "if provided, response body is treated as as template and all variables in it are replaced",
},
cli.BoolFlag{
Name: "redirect",
Usage: "if provided, request is redirected to the rewritten URL",
},
}
}
func rawURL(request *http.Request) string {
scheme := "http"
if request.TLS != nil || isXForwardedHTTPS(request) {
scheme = "https"
}
return strings.Join([]string{scheme, "://", request.Host, request.RequestURI}, "")
}
func isXForwardedHTTPS(request *http.Request) bool {
xForwardedProto := request.Header.Get("X-Forwarded-Proto")
return len(xForwardedProto) > 0 && xForwardedProto == "https"
}
type redirectHandler struct {
u *url.URL
}
func (f *redirectHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.Header().Set("Location", f.u.String())
w.WriteHeader(http.StatusFound)
w.Write([]byte(http.StatusText(http.StatusFound)))
}
type bufferWriter struct {
header http.Header
code int
buffer *bytes.Buffer
}
func (b *bufferWriter) Close() error {
return nil
}
func (b *bufferWriter) Header() http.Header {
return b.header
}
func (b *bufferWriter) Write(buf []byte) (int, error) {
return b.buffer.Write(buf)
}
// WriteHeader records the status code so it can be replayed on the real ResponseWriter later.
func (b *bufferWriter) WriteHeader(code int) {
b.code = code
}
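Outside of vulcand's own wiring (GetSpec/FromCli above), the middleware can also be applied to a plain http.Handler. A hedged sketch, not part of the vendored code; the import path github.com/vulcand/vulcand/plugin/rewrite and the example regexp are assumptions:
package main
import (
"fmt"
"net/http"
"github.com/vulcand/vulcand/plugin/rewrite" // assumed import path for the vendored rewrite package
)
func main() {
inner := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "you asked for %s\n", r.URL.Path)
})
// Rewrite http://<host>/old/<rest> to http://<host>/new/<rest>; no body
// rewriting, no redirect (the change stays internal to the proxy).
rw, err := rewrite.NewRewrite(`^http://(.*)/old/(.*)$`, `http://$1/new/$2`, false, false)
if err != nil {
panic(err)
}
handler, err := rw.NewHandler(inner)
if err != nil {
panic(err)
}
if err := http.ListenAndServe(":8080", handler); err != nil {
panic(err)
}
}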


@@ -0,0 +1,44 @@
package rewrite
import (
"io"
"io/ioutil"
"net/http"
"text/template"
)
// data represents template data that is available to use in templates.
type data struct {
Request *http.Request
}
// Apply reads a template string from the provided reader, applies variables
// from the provided request object to it and writes the result into
// the provided writer.
//
// The template syntax is standard Go text/template: http://golang.org/pkg/text/template/.
func Apply(in io.Reader, out io.Writer, request *http.Request) error {
body, err := ioutil.ReadAll(in)
if err != nil {
return err
}
return ApplyString(string(body), out, request)
}
// ApplyString applies variables from the provided request object to the provided
// template string and writes the result into the provided writer.
//
// The template syntax is standard Go text/template: http://golang.org/pkg/text/template/.
func ApplyString(in string, out io.Writer, request *http.Request) error {
t, err := template.New("t").Parse(in)
if err != nil {
return err
}
if err = t.Execute(out, data{request}); err != nil {
return err
}
return nil
}
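A hedged sketch of what these helpers make possible (not part of the vendored code; the import path github.com/vulcand/vulcand/plugin/rewrite is assumed): a rewritten URL can pull values out of the request via standard text/template actions.
package main
import (
"bytes"
"fmt"
"net/http"
"github.com/vulcand/vulcand/plugin/rewrite" // assumed import path for the vendored rewrite package
)
func main() {
req, err := http.NewRequest("GET", "http://example.com/foo", nil)
if err != nil {
panic(err)
}
req.Header.Set("X-Tenant", "acme")
// The {{.Request ...}} actions are resolved against the incoming request.
var out bytes.Buffer
if err := rewrite.ApplyString(`http://example.com/{{.Request.Header.Get "X-Tenant"}}/foo`, &out, req); err != nil {
panic(err)
}
fmt.Println(out.String()) // http://example.com/acme/foo
}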

26
vendor/github.com/vulcand/vulcand/router/router.go generated vendored Normal file

@@ -0,0 +1,26 @@
package router
import "net/http"
// Router captures all routing functionality required by vulcand.
// The routing functionality mainly comes from "github.com/vulcand/route".
type Router interface {
// Sets the not-found handler (this handler is called when no other handlers/routes in the routing library match).
SetNotFound(http.Handler) error
// Gets the not-found handler that is currently in use by this router.
GetNotFound() http.Handler
// Validates whether this is an acceptable route expression.
IsValid(string) bool
// Adds a new route->handler combination. The route is a string which provides the routing expression. http.Handler is called when this expression matches a request.
Handle(string, http.Handler) error
// Removes a route. The http.Handler associated with it will be discarded.
Remove(string) error
// ServeHTTP is the http.Handler implementation that allows callers to route their calls to sub-http.Handlers based on route matches.
ServeHTTP(http.ResponseWriter, *http.Request)
}
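NewRegistry above defaults its router to route.NewMux(), so the vendored route.Mux is one implementation of this interface. A hedged usage sketch, not part of the vendored code; the Path(...) expression syntax is assumed from the vulcand/route matcher language:
package main
import (
"fmt"
"net/http"
"github.com/vulcand/route"
"github.com/vulcand/vulcand/router"
)
func main() {
// route.NewMux() is the default implementation used by plugin.NewRegistry.
var r router.Router = route.NewMux()
hello := http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintln(w, "hello")
})
expr := `Path("/hello")` // route expression syntax assumed from the vulcand/route matcher language
if !r.IsValid(expr) {
panic("route expression rejected")
}
if err := r.Handle(expr, hello); err != nil {
panic(err)
}
if err := r.SetNotFound(http.NotFoundHandler()); err != nil {
panic(err)
}
if err := http.ListenAndServe(":8080", r); err != nil {
panic(err)
}
}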