Production logging in Flask (Python)

Mmm, logs.
Want all this beautiful custom information in your production logs? Read on...

Production logging in Flask can be a tricky beast. If you love logging and you’re not a lumberjack, this post might help you out!

I use Logentries to handle and view my logs, so this guide has a slant towards that, but these are the lessons that whipped my production logging into shape in general – it should be useful to everyone.

While the good people at Logentries provide a sweet Python library for their logging platform, I found that getting their logger working cleanly with Flask’s built-in production logging setup (rather than taking it out of the loop) was not quite as simple as their instructions made out, at least for a relative Python novice like myself.

For the benefit of anyone else struggling, I thought I’d share my code. This is straight from the codebase of my new side project GroupVite, incidentally also my first Python project. A good project is the best way to learn any language!

Here is the complete code, and I’ll talk you through each stage below.

import logging
from datetime import datetime

from flask import request
from flask_login import current_user  # current_user is provided by Flask-Login
from logentries import LogentriesHandler
...

# - Filter/Formatter
class ContextualFilter(logging.Filter):
    def filter(self, log_record):
    """ Provide some extra variables to give our logs some
        better info """
        log_record.utcnow = (
            datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S,%f %Z')
        )
        log_record.url = request.path
        log_record.method = request.method
        # Try to get the IP address of the user through reverse proxy
        log_record.ip = request.environ.get(
                            'HTTP_X_REAL_IP', 
                            request.remote_addr
        )
        if current_user.is_anonymous():
            log_record.user_id = 'guest'
        else:
            log_record.user_id = current_user.get_id()

        return True

# Use said info
log_format = (
    "%(utcnow)s\tl=%(levelname)s\tu=%(user_id)s\tip=%(ip)s"
    "\tm=%(method)s\turl=%(url)s\tmsg=%(message)s"
)
formatter = logging.Formatter(log_format)

# - Handlers
# -- Stream handler
streamHandler = logging.StreamHandler()
streamHandler.setLevel(logging.INFO)
streamHandler.setFormatter(formatter)

# -- LogEntries handler
leHandler = LogentriesHandler(app.config['LOGENTRIES_TOKEN'])
leHandler.setLevel(logging.INFO)
leHandler.setFormatter(formatter)

# - Logger
log = app.logger
log.setLevel(logging.DEBUG)
log.addFilter(ContextualFilter())
log.addHandler(streamHandler)
log.addHandler(leHandler)

Alrighty, so what’s going on?

import logging
from datetime import datetime

from flask import request
from flask_login import current_user  # current_user is provided by Flask-Login
from logentries import LogentriesHandler
...

# - Filter/Formatter
class ContextualFilter(logging.Filter):
    def filter(self, log_record):
    """ Provide some extra variables to give our logs some
        better info """
        log_record.utcnow = (
            datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S,%f %Z')
        )
        log_record.url = request.path
        log_record.method = request.method
        # Try to get the IP address of the user through reverse proxy
        log_record.ip = request.environ.get(
                            'HTTP_X_REAL_IP',
                            request.remote_addr
                        )

        if current_user.is_anonymous():
            log_record.user_id = 'guest'
        else:
            log_record.user_id = current_user.get_id()

        return True

# Use said info
log_format = (
    "%(utcnow)s\tl=%(levelname)s\tu=%(user_id)s\tip=%(ip)s"
    "\tm=%(method)s\turl=%(url)s\tmsg=%(message)s"
)
formatter = logging.Formatter(log_format)

In this section of the code we import the necessary libraries (be sure to pip install them into your virtualenv first) and gather some extra information to attach to our log messages. This is by no means essential, but by setting up a custom filter like the ContextualFilter above we can make our logs far more useful and readable. I should note that most of this section is adapted from elsewhere – Real Python, I think, though I can’t find the exact post. Regardless, it’s a great resource for learning Python!

The log_record object holds the extra variables for us, for use in the formatter at the bottom of this block. There’s no need to prefix each variable in log_format with log_record; that’s assumed.

Note that a lot of people choose to use asctime (server time) instead of my custom utcnow, but I like to keep everything in UTC where possible. Logentries will automatically prepend a timestamp to your logs when you view them online, but if you’re like me you don’t want to lose that information on export, or have it unavailable if you choose to stream to a file down the line!
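
For concreteness, here’s roughly what a single line looks like once the filter and formatter have done their work. Every value below (user id, IP, route) is made up for illustration:

# Somewhere in a view:
log.info('RSVP created')

# ...comes out of the formatter looking roughly like this (tab-separated):
# 2015-03-14 09:26:53,589793	l=INFO	u=42	ip=203.0.113.7	m=POST	url=/rsvp	msg=RSVP created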

# - Handlers
# -- Stream handler
streamHandler = logging.StreamHandler()
streamHandler.setLevel(logging.INFO)
streamHandler.setFormatter(formatter)

This next part is relevant for those of you who also want to send your log messages to the console (useful for local debugging). You’ll likely have seen examples like this before. We set up our streamHandler and set its logging level to INFO, which means it only emits logs of severity INFO or above – this stops it duplicating the normal debug output you’ll see when running with app.debug == True, as you likely do in your development environment. See the Python logging documentation for more on logging levels.
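
If the interplay of levels is new to you, here’s a tiny standalone sketch (nothing Flask-specific; the logger name and messages are arbitrary) showing a handler-level threshold in action:

import logging

demo = logging.getLogger('level-demo')
demo.setLevel(logging.DEBUG)    # the logger itself lets everything through

console = logging.StreamHandler()
console.setLevel(logging.INFO)  # ...but this handler only emits INFO and above
demo.addHandler(console)

demo.info('This gets printed')
demo.debug('This is dropped by the handler and never printed')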

# -- LogEntries handler
leHandler = LogentriesHandler(app.config['LOGENTRIES_TOKEN'])
leHandler.setLevel(logging.INFO)
leHandler.setFormatter(formatter)

Here’s the Logentries-specific stuff. We instantiate a LogentriesHandler with a token passed in from app.config. I highly recommend having at least two config files – one for development and one for production – with a separate Logentries token for each environment, so you don’t clutter your important production logs with development lines.
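
If you don’t already split your config, here’s a sketch of one common pattern – the class names and layout are just my assumptions, and the only thing the handler above actually needs is a LOGENTRIES_TOKEN value per environment:

# config.py -- one possible layout, not the only way to do it
class Config(object):
    SECRET_KEY = 'change-me'

class DevelopmentConfig(Config):
    DEBUG = True
    LOGENTRIES_TOKEN = 'token-for-your-dev-log-set'  # placeholder

class ProductionConfig(Config):
    DEBUG = False
    LOGENTRIES_TOKEN = 'token-for-your-production-log-set'  # placeholder

# Then, wherever you create the app:
# app.config.from_object('config.ProductionConfig')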

Next we set its level, the same as for the streamHandler, and attach the formatter so our logs are nice and pretty when they arrive at the server.

# - Logger
log = app.logger
log.setLevel(logging.DEBUG)
log.addFilter(ContextualFilter())
log.addHandler(streamHandler)
log.addHandler(leHandler)

Here is where the real magic happens. We assign app.logger, Flask’s built-in logger, to a nice short log variable, meaning that we can now log with clean code like log.info('This guide is super useful :D') and log.warn('The last log message may be a lie, like the cake') rather than having to preface each logging call with app.logger....
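
For example, in a made-up view function it reads like this:

@app.route('/event/<int:event_id>/rsvp', methods=['POST'])
def rsvp(event_id):
    # Hypothetical route, purely to show how terse the logging calls become
    log.info('RSVP received for event %s' % event_id)
    return 'Thanks!'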

The next line is the most important one in this article. Without setting the level of the logger to logging.DEBUG, we will only see exceptions logged to Logentries from our production server (running with app.debug == False). This one line eluded me for long enough that I was tearing my hair out by the time I finally got to see sweet, sweet info logs show up on my production setup. Of course, if exceptions are all you want sent to Logentries then go ahead and set this to logging.ERROR instead, but I feel you’re missing out on the power of logs as, well, a great log of the actions users take that you feel are important.
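
To see why the logger’s own level matters even though the handlers already have levels set, here’s a minimal sketch of the gate effect (names and messages are illustrative):

import logging

demo = logging.getLogger('gate-demo')
demo.addHandler(logging.StreamHandler())  # a handler with no level set emits whatever it receives

demo.setLevel(logging.ERROR)
demo.info('Never reaches the handler; the logger drops it first')

demo.setLevel(logging.DEBUG)
demo.info('Now it reaches the handler and gets printed')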

Next, we add our ContextualFilter() to the logger, just as we attached the formatter to the handlers. This is what supplies the custom variables our format string expects, and without it you’ll get errors when the handlers try to format a record – for whatever reason, setting the filter at handler level alone was not quite enough for me.

Finally, we add both handlers to log, making it duplicate each log message to both the console and Logentries. By adding a third handler, a FileHandler, you could also duplicate the logs to a local file if you’d like some permanence in your logging setup.
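
As a sketch (the filename here is an arbitrary choice), that third handler could look like this:

# -- File handler (optional): keep a local copy of the same log lines
fileHandler = logging.FileHandler('app.log')  # arbitrary path
fileHandler.setLevel(logging.INFO)
fileHandler.setFormatter(formatter)
log.addHandler(fileHandler)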

Now all our logs are shipped to Logentries as they happen. You can trace the path to an error condition with far more resolution and really see what your users are up to. Combine this with the awesome power of grep and you have a mean analytics setup into the bargain.

tl;dr – Make sure you set the level of your logger to logging.DEBUG in order to ensure that all the logs (not just exceptions) get sent to Logentries if nothing else is working for you.

Enjoy, and many happy plaintext returns!