wsgi – WSGI server

The wsgi module provides a simple and easy way to start an event-driven WSGI server. This can serve as an embedded web server in an application, or as the basis for a more full-featured web server package. One such package is Spawning.

To launch a wsgi server, simply create a socket and call eventlet.wsgi.server() with it:

from eventlet import wsgi
import eventlet

def hello_world(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, World!\r\n']

wsgi.server(eventlet.listen(('', 8090)), hello_world)

You can find a slightly more elaborate version of this code in the file examples/wsgi.py.

eventlet.wsgi.format_date_time(timestamp)

Formats a unix timestamp into an HTTP standard string.
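
For example (a minimal sketch; the exact string depends on the timestamp passed in):

import time

from eventlet import wsgi

# Produces an HTTP-style date string such as 'Thu, 01 Jan 2015 00:00:00 GMT'
print(wsgi.format_date_time(time.time()))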

eventlet.wsgi.server(sock, site, log=None, environ=None, max_size=None, max_http_version='HTTP/1.1', protocol=<class 'eventlet.wsgi.HttpProtocol'>, server_event=None, minimum_chunk_size=None, log_x_forwarded_for=True, custom_pool=None, keepalive=True, log_output=True, log_format='%(client_ip)s - - [%(date_time)s] "%(request_line)s" %(status_code)s %(body_length)s %(wall_seconds).6f', url_length_limit=8192, debug=True, socket_timeout=None, capitalize_response_headers=True)

Start up a WSGI server handling requests from the supplied server socket. This function loops forever. The sock object will be closed after server exits, but the underlying file descriptor will remain open, so if you have a dup() of sock, it will remain usable.

Warning

At the moment server() will always wait for active connections to finish before exiting, even if there’s an exception raised inside it (all exceptions are handled the same way, including greenlet.GreenletExit and those inheriting from BaseException).

While this may not be an issue normally, when it comes to long running HTTP connections (like eventlet.websocket) it will become problematic and calling wait() on a thread that runs the server may hang, even after using kill(), as long as there are active connections.
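
For illustration, a minimal sketch of the embedding pattern this warning refers to (the application and port here are only placeholders):

import eventlet
from eventlet import wsgi

def app(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\r\n']

server_gt = eventlet.spawn(wsgi.server, eventlet.listen(('', 8090)), app)
# ... later, during shutdown:
server_gt.kill()
# wait() returns once the accept loop stops, but it can block for as long as
# already-accepted connections (e.g. long-lived websockets) remain active.
server_gt.wait()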

Parameters:
  • sock – Server socket, must be already bound to a port and listening.

  • site – WSGI application function.

  • log – logging.Logger instance or file-like object that logs should be written to. If a Logger instance is supplied, messages are sent to the INFO log level. If not specified, sys.stderr is used.

  • environ – Additional parameters that go into the environ dictionary of every request.

  • max_size – Maximum number of client connections opened at any time by this server. Default is 1024.

  • max_http_version – Set to “HTTP/1.0” to make the server pretend it only supports HTTP 1.0. This can help with applications or clients that don’t behave properly using HTTP 1.1.

  • protocol – Protocol class. Deprecated.

  • server_event – Used to collect the Server object. Deprecated.

  • minimum_chunk_size – Minimum size in bytes for http chunks. This can be used to improve performance of applications which yield many small strings, though using it technically violates the WSGI spec. This can be overridden on a per request basis by setting environ['eventlet.minimum_write_chunk_size'].

  • log_x_forwarded_for – If True (the default), logs the contents of the x-forwarded-for header in addition to the actual client ip address in the ‘client_ip’ field of the log line.

  • custom_pool – A custom GreenPool instance which is used to spawn client green threads. If this is supplied, max_size is ignored.

  • keepalive – If set to False or zero, disables keepalives on the server; all connections will be closed after serving one request. If numeric, it will be the timeout used when reading the next request.

  • log_output – A Boolean indicating if the server will log data or not.

  • log_format – A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. The default is a good example of how to use it.

  • url_length_limit – The maximum allowed length of the request URL. If exceeded, a 414 error is returned.

  • debug – True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies.

  • socket_timeout – Timeout for client connections’ socket operations. Default None means wait forever.

  • capitalize_response_headers – Normalize response headers’ names to Foo-Bar. Default is True.
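
As a sketch of how several of these options combine (the application, port, and pool size are only illustrative):

import logging

import eventlet
from eventlet import wsgi

def app(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, World!\r\n']

logger = logging.getLogger('wsgi')   # messages are logged at the INFO level
pool = eventlet.GreenPool(200)       # when supplied, max_size is ignored

wsgi.server(eventlet.listen(('', 8090)), app,
            log=logger,
            custom_pool=pool,
            keepalive=False,         # close each connection after one request
            socket_timeout=30,       # per-operation timeout for client sockets
            debug=False)             # no tracebacks in 500 responses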

SSL

Creating a secure server is only slightly more involved than the base example. All that’s needed is to pass an SSL-wrapped socket to the server() function:

wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', 8090)),
                              certfile='cert.crt',
                              keyfile='private.key',
                              server_side=True),
            hello_world)

Applications can detect whether they are inside a secure server by the value of the env['wsgi.url_scheme'] environment variable.
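
For example, a handler might branch on that value (a minimal sketch):

def hello_world(env, start_response):
    if env['wsgi.url_scheme'] == 'https':
        body = b'Hello over TLS!\r\n'
    else:
        body = b'Hello over plain HTTP!\r\n'
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]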

Non-Standard Extension to Support Post Hooks

Eventlet’s WSGI server supports a non-standard extension to the WSGI specification where env['eventlet.posthooks'] contains an array of post hooks that will be called after fully sending a response. Each post hook is a tuple of (func, args, kwargs) and the func will be called with the WSGI environment dictionary, followed by the args and then the kwargs in the post hook.

For example:

from eventlet import wsgi
import eventlet

def hook(env, arg1, arg2, kwarg3=None, kwarg4=None):
    print('Hook called: %s %s %s %s %s' % (env, arg1, arg2, kwarg3, kwarg4))

def hello_world(env, start_response):
    env['eventlet.posthooks'].append(
        (hook, ('arg1', 'arg2'), {'kwarg3': 3, 'kwarg4': 4}))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, World!\r\n']

wsgi.server(eventlet.listen(('', 8090)), hello_world)

The above code will print the WSGI environment and the other passed function arguments for every request processed.

Post hooks are useful when code needs to be executed after a response has been fully sent to the client (or when the client disconnects early). One example is for more accurate logging of bandwidth used, as client disconnects use less bandwidth than the actual Content-Length.
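
A sketch of that use case (the hook and logger names here are illustrative, not part of eventlet’s API), meant to be served exactly like the example above:

import logging
import time

access_log = logging.getLogger('access')

def log_finished(env, logger):
    # Runs only after the response has been fully sent (or the client has
    # disconnected), so the timestamp reflects the real end of the transfer
    # rather than the moment the application returned.
    logger.info('%s %s finished at %.3f',
                env['REQUEST_METHOD'], env.get('PATH_INFO', ''), time.time())

def hello_world(env, start_response):
    env['eventlet.posthooks'].append((log_finished, (access_log,), {}))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, World!\r\n']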

“100 Continue” Response Headers

Eventlet’s WSGI server supports sending optional headers with HTTP “100 Continue” provisional responses. This is useful when a WSGI server expects to complete a PUT request as a single HTTP request/response pair and also wants to communicate back to the client as part of the same HTTP transaction, for example to pass hints about the characteristics of the data payload it can accept. The server might include a header in the accompanying “100 Continue” response indicating whether it can accept encrypted data payloads, so the client can decide between encrypted and unencrypted transfer before it starts sending the data.

This works well for WSGI servers, as the WSGI specification mandates support for the HTTP expect/continue mechanism (PEP 333).

To define the “100 Continue” response headers, one may call set_hundred_continue_response_headers() on env['wsgi.input'] as shown in the following example:

from eventlet import wsgi
import eventlet

def wsgi_app(env, start_response):
    # Define "100 Continue" response headers
    env['wsgi.input'].set_hundred_continue_response_headers(
        [('Hundred-Continue-Header-1', 'H1'),
         ('Hundred-Continue-Header-k', 'Hk')])
    # The following read() causes a "100 Continue" response to be sent
    # to the client.  Headers 'Hundred-Continue-Header-1' and
    # 'Hundred-Continue-Header-k' are sent with the response
    # following the "HTTP/1.1 100 Continue\r\n" status line
    text = env['wsgi.input'].read()
    start_response('200 OK', [('Content-Length', str(len(text)))])
    return [text]

You can find a more elaborate example in the file: tests/wsgi_test.py, test_024a_expect_100_continue_with_headers().

Per RFC 7231 (http://tools.ietf.org/html/rfc7231#section-6.2), a client is required to be able to process one or more “100 Continue” responses. A sample use case might be a protocol where the server uses a “100 Continue” response to indicate to the client that it is still working on a request and that the client should not time out.

To support multiple “100 Continue” responses, the eventlet wsgi module exports the API send_hundred_continue_response().

Sample use cases for chunked and non-chunked HTTP scenarios are included in the wsgi test case tests/wsgi_test.py, test_024b_expect_100_continue_with_headers_multiple_chunked() and test_024c_expect_100_continue_with_headers_multiple_nonchunked().