Prevent rapid reset http2 DOS on API server #121120
Conversation
/triage accepted
// http2 is an expensive protocol that is prone to abuse,
// see CVE-2023-44487 and CVE-2023-39325 for an example.
// Do not allow unauthenticated clients to keep these
// connections open (i.e. basically degrade them to http1).
closing a connection isn't degrading to http1, it's turning off keep-alive, right?
this rejects rather than downgrades, so I don't think it is going to work as expected: clients would need to know why the connection was rejected and then retry with http1
package main

import (
	"io"
	"log"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// backend answers every request; for http2 requests it asks the
	// server to close the connection once this response is done
	backend := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("backend: revc req %s %s %s", r.RequestURI, r.RemoteAddr, r.Proto)
		if r.ProtoMajor == 2 {
			w.Header().Set("Connection", "close")
		}
	}))
	backend.EnableHTTP2 = true
	backend.StartTLS()
	defer backend.Close()

	log.Println("backend url", backend.URL)
	hc := backend.Client()

	for i := 0; i < 100; i++ {
		log.Printf("client: send req %d", i)
		resp, err := hc.Get(backend.URL)
		if err != nil {
			log.Printf("Request Failed: %s", err)
			continue // resp is nil on error, do not touch it
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close() // close inside the loop instead of deferring 100 closes
		if err != nil {
			log.Printf("Reading body failed: %s", err)
			continue
		}
		// log the response body
		log.Printf("bod received %s", string(body))
		time.Sleep(1 * time.Second)
	}
}
go run main.go
2023/10/10 20:16:32 backend url https://127.0.0.1:41159
2023/10/10 20:16:32 client: send req 0
2023/10/10 20:16:32 backend: revc req / 127.0.0.1:36172 HTTP/2.0
2023/10/10 20:16:32 bod received
2023/10/10 20:16:33 client: send req 1
2023/10/10 20:16:33 backend: revc req / 127.0.0.1:45368 HTTP/2.0
2023/10/10 20:16:33 bod received
2023/10/10 20:16:34 client: send req 2
2023/10/10 20:16:34 backend: revc req / 127.0.0.1:45372 HTTP/2.0
2023/10/10 20:16:34 bod received
2023/10/10 20:16:35 client: send req 3
2023/10/10 20:16:35 backend: revc req / 127.0.0.1:45374 HTTP/2.0
2023/10/10 20:16:35 bod received
2023/10/10 20:16:36 client: send req 4
2023/10/10 20:16:36 backend: revc req / 127.0.0.1:45380 HTTP/2.0
2023/10/10 20:16:36 bod received
2023/10/10 20:16:37 client: send req 5
2023/10/10 20:16:37 backend: revc req / 127.0.0.1:45386 HTTP/2.0
2023/10/10 20:16:37 bod received
2023/10/10 20:16:38 client: send req 6
2023/10/10 20:16:38 backend: revc req / 127.0.0.1:45398 HTTP/2.0
2023/10/10 20:16:38 bod received
2023/10/10 20:16:39 client: send req 7
2023/10/10 20:16:39 backend: revc req / 127.0.0.1:45410 HTTP/2.0
2023/10/10 20:16:39 bod received
2023/10/10 20:16:40 client: send req 8
2023/10/10 20:16:40 backend: revc req / 127.0.0.1:45426 HTTP/2.0
2023/10/10 20:16:40 bod received
2023/10/10 20:16:41 client: send req 9
2023/10/10 20:16:41 backend: revc req / 127.0.0.1:45442 HTTP/2.0
2023/10/10 20:16:41 bod received
2023/10/10 20:16:42 client: send req 10
2023/10/10 20:16:42 backend: revc req / 127.0.0.1:45450 HTTP/2.0
2023/10/10 20:16:42 bod received
2023/10/10 20:16:43 client: send req 11
2023/10/10 20:16:43 backend: revc req / 127.0.0.1:43390 HTTP/2.0
2023/10/10 20:16:43 bod received
2023/10/10 20:16:44 client: send req 12
2023/10/10 20:16:44 backend: revc req / 127.0.0.1:43392 HTTP/2.0
2023/10/10 20:16:44 bod received
2023/10/10 20:16:45 client: send req 13
2023/10/10 20:16:45 backend: revc req / 127.0.0.1:43398 HTTP/2.0
2023/10/10 20:16:45 bod received
2023/10/10 20:16:46 client: send req 14
2023/10/10 20:16:46 backend: revc req / 127.0.0.1:43406 HTTP/2.0
2023/10/10 20:16:46 bod received
2023/10/10 20:16:47 client: send req 15
2023/10/10 20:16:47 backend: revc req / 127.0.0.1:43414 HTTP/2.0
2023/10/10 20:16:47 bod received
2023/10/10 20:16:48 client: send req 16
2023/10/10 20:16:48 backend: revc req / 127.0.0.1:43424 HTTP/2.0
2023/10/10 20:16:48 bod received
2023/10/10 20:16:49 client: send req 17
2023/10/10 20:16:49 backend: revc req / 127.0.0.1:43438 HTTP/2.0
2023/10/10 20:16:49 bod received
2023/10/10 20:16:50 client: send req 18
2023/10/10 20:16:50 backend: revc req / 127.0.0.1:43454 HTTP/2.0
2023/10/10 20:16:50 bod received
2023/10/10 20:16:51 client: send req 19
2023/10/10 20:16:51 backend: revc req / 127.0.0.1:43464 HTTP/2.0
2023/10/10 20:16:51 bod received
2023/10/10 20:16:52 client: send req 20
2023/10/10 20:16:52 backend: revc req / 127.0.0.1:43472 HTTP/2.0
2023/10/10 20:16:52 bod received
2023/10/10 20:16:53 client: send req 21
2023/10/10 20:16:53 backend: revc req / 127.0.0.1:46854 HTTP/2.0
2023/10/10 20:16:53 bod received
2023/10/10 20:16:54 client: send req 22
2023/10/10 20:16:54 backend: revc req / 127.0.0.1:46860 HTTP/2.0
2023/10/10 20:16:54 bod received
2023/10/10 20:16:55 client: send req 23
2023/10/10 20:16:55 backend: revc req / 127.0.0.1:46872 HTTP/2.0
2023/10/10 20:16:55 bod received
2023/10/10 20:16:56 client: send req 24
2023/10/10 20:16:56 backend: revc req / 127.0.0.1:46880 HTTP/2.0
2023/10/10 20:16:56 bod received
it's turning off keep-alive, right?
ah, now I understand. Yes, the connection will not be reused, so the client has to establish a new connection the next time
yeah, a better comment here would help (e.g. "limit this connection to just this request, and then close the connection")
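For reference, a minimal sketch of the behavior being discussed, with hypothetical names (this is not the exact PR code): a filter that sets Connection: close on http2 responses for unauthenticated requests, which the Go http2 server turns into a GOAWAY frame and a connection shutdown once the request completes.

package filters

import "net/http"

// withUnauthenticatedHTTP2Close sketches the mitigation discussed above:
// limit the connection to just this request, and then close the connection.
// isAuthenticated is a hypothetical stand-in for the API server's real
// authentication check.
func withUnauthenticatedHTTP2Close(handler http.Handler, isAuthenticated func(*http.Request) bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 && !isAuthenticated(r) {
			// the Go http2 server translates this header into a GOAWAY
			// frame and closes the connection after this response
			w.Header().Set("Connection", "close")
		}
		handler.ServeHTTP(w, r)
	})
}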
talked offline, @enj will add unit tests and wrap the connection close behavior in a feature gate so backports can opt out if needed
type timeoutWriter interface {
	http.ResponseWriter
-	timeout(*apierrors.StatusError)
+	timeout(*http.Request, *apierrors.StatusError)
}
we had a race in this handler, 44705c7; we need to check that plumbing the request through is ok, or whether we should use the clone rCopy instead
or just pull out and plumb the exact bits we need (the context and ProtoMajor) before the timeout handler splits off a goroutine
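A sketch of that alternative, using hypothetical names: copy the scalar bits off the request on the serving goroutine, before the timeout handler forks, so the spawned goroutine never touches the shared *http.Request.

package timeout

import (
	"context"
	"net/http"
)

// timeoutParams is a hypothetical container for the exact bits the timeout
// path needs; copying them out up front avoids racing on the request, as in
// 44705c7.
type timeoutParams struct {
	ctx        context.Context
	protoMajor int
}

func captureTimeoutParams(r *http.Request) timeoutParams {
	// read from r only on the serving goroutine; the timeout goroutine is
	// handed this value copy instead of r itself
	return timeoutParams{ctx: r.Context(), protoMajor: r.ProtoMajor}
}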
deadline, ok := r.Context().Deadline()
if !ok {
	return true // this context had no deadline but was canceled, meaning the client likely reset it early
are you looking for a heuristic to know if the handler was canceled by the client with an RST_STREAM frame?
Correct.
@neild do you have any suggestion, or do you know if it is even possible to detect that the connection was closed by an RST_STREAM frame?
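For what it's worth, a sketch of the heuristic the diff above encodes (the helper name is hypothetical): a canceled context that never had a deadline cannot have been canceled by the server-side timeout, so the cancellation most likely came from the client, e.g. via an http2 RST_STREAM.

package timeout

import (
	"context"
	"errors"
	"net/http"
)

// likelyClientReset reports whether the request context was canceled by
// something other than the server's own deadline.
func likelyClientReset(r *http.Request) bool {
	ctx := r.Context()
	if ctx.Err() == nil {
		return false // nothing was canceled
	}
	if _, ok := ctx.Deadline(); !ok {
		return true // canceled but no deadline was ever set: not our timeout
	}
	// a deadline existed; treat anything other than DeadlineExceeded as a
	// client-initiated cancellation
	return !errors.Is(ctx.Err(), context.DeadlineExceeded)
}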
This change fully addresses CVE-2023-44487 and CVE-2023-39325 for
the API server when the client is unauthenticated.
The changes to util/runtime are required because otherwise a large
number of requests can get blocked on the time.Sleep calls.
For unauthenticated clients (either via 401 or the anonymous user),
we simply no longer allow such clients to hold open http2
connections. They can use http2, but with the performance of http1
(with keep-alive disabled).
Since this change has the potential to cause issues, the
UnauthenticatedHTTP2DOSMitigation feature gate can be disabled to
remove this protection (it is enabled by default). For example,
when the API server is fronted by an L7 load balancer that is set up
to mitigate http2 attacks, unauthenticated clients could otherwise
force the load balancer to stop reusing connections to the API
server (many incoming connections can share the same backend
connection, so losing reuse there is costly). An API server on a
private network may also opt to disable this protection to avoid a
performance regression for unauthenticated clients.
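For operators who want that opt-out, the gate should be controllable through the standard kube-apiserver feature-gates mechanism; a hypothetical invocation:

kube-apiserver --feature-gates=UnauthenticatedHTTP2DOSMitigation=false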
For all other clients, we rely on the golang.org/x/net fix in
golang/net@b225e7c
That change is not sufficient to adequately protect against a
motivated client; future changes to Kube and/or golang.org/x/net
will be explored to address this gap.
The Kube API server now advertises a maximum of 100 concurrent
streams per connection instead of 250 (matching the Go http2 client
default). This lowers the abuse limit from 1000 to 400.
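The 4x multiplier reflects my reading of the golang.org/x/net mitigation, which still tolerates a bounded number of extra in-flight handlers per connection, so the per-connection abuse ceiling scales with the advertised stream limit: 4 x 250 = 1000 before, 4 x 100 = 400 after. A minimal sketch of lowering the advertised limit on a plain Go server (the API server wires this through its own serving config, not this snippet; the address and cert paths are placeholders):

package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443"}
	// advertise at most 100 concurrent streams per connection, matching
	// the Go http2 client default
	if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 100}); err != nil {
		panic(err)
	}
	// srv.ListenAndServeTLS("cert.pem", "key.pem") would then serve http2
	// with the lowered limit
}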
Signed-off-by: Monis Khan [email protected]
/kind bug