In my previous post I explained some of the basics of Logbox, including the use of appenders. An appender is just a component which takes care of sending your log messages to some message repository, such as a file, console, socket, email, database and so on. Logbox is very handy because it has a standard format to send your log messages and optionally extra info. But sometimes you want to send extra information and show it in a nice format in your logs. Logbox has two ways to modify the output: Layout components and custom appenders.
In most cases Logbox sends the following information:

  • Message
  • ExtraInfo (any data)
  • Severity (from FATAL to DEBUG)
  • Category
  • Logdate
  • Appendername

Not all of this information is written to the target repository; that depends on the appender. Some targets have their own logging dates, or they skip some of the severities (like DEBUG). The DummyAppender doesn't write anything at all! So your appender determines what gets written, and in what format.

Sometimes we want more. Most of my applications need authentication, so I would like to see the username in my logs. Other applications are multi-tenant, so I would like to see the tenant code (and the username) in my log. So how can I make sure my tenant code and username will be added to the LogEvent? And how can I write both properties to my log repository?

Adding extra info: use the extrainfo parameter in your message call

If you just read the heading above, this option should make sense :-). And indeed, most appenders will write your extraInfo in some format to your message repository. So myLogger.warn( message="Some warning", extraInfo={ username="its me!" } ) will do the trick. But it has two disadvantages:

  1. You have to add your username to the extraInfo data in every logging call, which is error prone and annoying if you do a lot of logging. You can work around this by creating your own logging service as a wrapper: every call logs the message and automatically appends the username and/or tenant code to the extraInfo data.
  2. Although your extra properties are registered now, they are hidden in some nested data structure, such as JSON or XML. You can search through these fields, but it is not very fast. It would be a lot easier if username and tenant were individual fields. So we need a better appender for that.
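A wrapper service along the lines of point 1 might look like this. It is a minimal sketch: the SecurityService and TenantService names (and their injection DSLs) are assumptions, not part of the original code.

```cfml
// Hypothetical logging wrapper: enriches every log call with
// username and tenant code before delegating to a LogBox logger.
component singleton {

  property name="logger"          inject="logbox:logger:{this}";
  property name="securityService" inject="SecurityService";  // assumed service
  property name="tenantService"   inject="TenantService";    // assumed service

  function warn( required string message, struct extraInfo = {} ){
    // merge the caller's extraInfo with the global request context
    arguments.extraInfo[ "username" ]   = securityService.getUserName();
    arguments.extraInfo[ "tenantCode" ] = tenantService.getTenantCode();
    variables.logger.warn( arguments.message, arguments.extraInfo );
  }
}
```

This solves the "repeat yourself" problem, but note that the extra properties still end up serialized inside the extraInfo field, which is exactly the second disadvantage above.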

Improving your logformat: Layout component or Custom appender

A layout component is actually quite simple. It is a format function which returns a string in the desired format. The FileAppender can take a layout component. If you want to use one, you define your appender like this:

  appenders = {
    debugLog =  {
      class="FileAppender",
      properties = {
        filePath = "../logs",
        fileName = "debug",
        layout   = "path.to.LogLayoutComponent",
//..
      },
    },
//...

So for FileAppenders this is quite simple. For EmailAppenders there is an extra undocumented feature: you can create a getTitle() method in your layout component, so you also have control over the title. If you search for hasCustomLayout() in the coldbox.system.logging.appenders folder you will see how it is used. I've always wondered if anyone is using layout components. In my opinion they are quite limited, because you can only format your logEvent as a string. That makes them useless for a database appender. We need more control, and that means: a CUSTOM appender.
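For reference, a minimal layout component could look something like this. This is a sketch under the assumption that LogBox calls a format() method with the LogEvent and writes the returned string to the target; getSeverity() returns the numeric severity here, so map it yourself if you want the name.

```cfml
// path/to/LogLayoutComponent.cfc - a minimal layout sketch
component extends="coldbox.system.logging.AbstractLayout" {

  // LogBox hands us the LogEvent; whatever string we return ends up in the file
  function format( required logEvent ){
    return dateTimeFormat( arguments.logEvent.getTimestamp(), "yyyy-mm-dd HH:nn:ss" )
      & " [severity " & arguments.logEvent.getSeverity() & "] "
      & arguments.logEvent.getCategory()
      & " - " & arguments.logEvent.getMessage();
  }
}
```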

Anatomy of an appender

Before we create our custom appender, let's discuss what's going on in an appender. The manual has some useful info on custom appenders, but it is a bit outdated, since ColdBox version 6 added async logging, which is non-blocking and makes your application perform better. Each appender inherits from coldbox.system.logging.AbstractAppender and has at least the following methods:

  • init() which can read min and max logging levels, and a lot of additional properties to define the target (like filenames or datasource and table, retention time and more)
  • onRegistration() and onUnRegistration(): useful for preparation and cleanup of the log target
  • logMessage() is obviously the most important method. Here you can define which properties are written to your log target. For many of the appenders the implementation changed in version 6: the ConsoleAppender, FileAppender, RollingFileAppender and DBAppender are all async now. This means the logMessage() method just writes to a queue with queueMessage( entry ). You don't have to modify queueMessage() to get async goodness, but you do have to implement an extra method:
  • processQueueElement( required data, required queueContext ): instead of writing to your target directly in logMessage(), you send each queued LogEvent to the log destination in this method, for example with a file append or a database insert.
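Putting those pieces together, the skeleton of an async custom appender looks roughly like this. It is a sketch: the init() argument list is my reading of the AbstractAppender signature, so check it against your LogBox version.

```cfml
// Skeleton of an async custom appender (sketch, not final code)
component extends="coldbox.system.logging.AbstractAppender" {

  function init( required name, struct properties = {}, layout = "", levelMin = 0, levelMax = 4 ){
    super.init( argumentCollection = arguments );
    return this;
  }

  function onRegistration(){
    // prepare the log target: open files, verify tables, etc.
  }

  function onUnRegistration(){
    // clean up the log target
  }

  function logMessage( required logEvent ){
    // don't write to the target directly; push onto the async queue
    queueMessage( { message : arguments.logEvent.getMessage() } );
    return this;
  }

  function processQueueElement( required data, required queueContext ){
    // the queue worker calls this for each entry: write to the target here
    return this;
  }
}
```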

Requirements for our ExtendedDBAppender

We have a multitenant application where each tenant has its own datasource, but we are logging everything for all tenants to a shared loggingDatasource. Additionally we want to log the authenticated user for every LogEvent. Both the tenant and username should be registered in separate fields, so we can easily place filters on the logs for every tenant or user. We have lots of categories and many logging calls, so we don't want to add username and tenant code manually to every log message. In this application we want the username and tenantcode to be globally available in the private request collection, so we can access these values easily.

So our first job was to create a preProcess interceptor. preProcess fires quite early, so we can do something like this:

void function preProcess( event, data, buffer, rc, prc ){
  // user and tenant are in some persistent service
  prc.currentUserName = someService.getUserName();
  prc.tenantCode      = myTenantservice.getTenantCode();
}

So we registered the interceptor and we are done. If the user is not logged in, our getUserName() returns 'anonymous'. We chose the username, but you could also write a userId, customerId or any other value you want to the prc.
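Registering the interceptor happens in the ColdBox configuration. For example (the component path is an assumption, use your own):

```cfml
// config/ColdBox.cfc (snippet) - hypothetical interceptor path
interceptors = [
  { class : "interceptors.LoggingContext" }
];
```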

Creating the appender

Now for the appender. We need a database appender which creates the extra fields 'username' and 'tenantcode' and adds the values to the database. We started by extending the standard DBAppender. If you look at the DBAppender you will see it has a lot of extra methods for cleaning old entries, creating the database table and so on. It also makes sense to have a look at the AbstractAppender code, so you can see how some methods are used. But actually we only needed to change DEFAULT_COLUMNS and the methods logMessage(), processQueueElement() and ensureTable().

component accessors="true" extends="coldbox.system.logging.appenders.DBAppender" {
  // Default column names
  // BEWARE: order is important for ensureTable method
  variables.DEFAULT_COLUMNS = [
    "id",
    "severity",
    "category",
    "logdate",
    "appendername",
    "message",
    "extrainfo",
    "tenantcode",
    "username"
  ];

The logMessage() method is pretty easy. We just add the tenantcode and username to the call and didn't change anything else. The only thing which was not immediately obvious from the DBAppender or other appenders: how to get a reference to the private request collection. I couldn't inject a requestService at LogBox configuration time, since WireBox is not ready yet. But after some investigation I found the AbstractAppender has a reference to ColdBox, so I could get my prc values by calling variables.coldbox.getRequestService().getContext().getPrivateValue().

function logMessage( required coldbox.system.logging.LogEvent logEvent ){
  var category = getProperty( "defaultCategory" );
  // Check Category Sent?
  if ( NOT arguments.logEvent.getCategory() eq "" ) {
    category = arguments.logEvent.getCategory();
  }
  queueMessage( {
    severity   : severityToString( arguments.logEvent.getSeverity() ),
    category   : left( category, 100 ),
    timestamp  : arguments.logEvent.getTimestamp(),
    message    : arguments.logEvent.getMessage(),
    extraInfo  : arguments.logEvent.getExtraInfoAsString(),
    tenantcode : variables.coldbox.getRequestService()
      .getContext()
      .getPrivateValue( "tenantCode", "unknown" ),
    username   : variables.coldbox.getRequestService()
      .getContext()
      .getPrivateValue( "currentUserName", "unknown" )
  } );
  return this;
}

So now we can queue our message. We process the queue in the processQueueElement() method. And again, it just adds some extra values to the additional columns.

function processQueueElement( required data, required queueContext ){
  var cols = getColumnNames();
  // Insert into table
  queryExecute(
    "INSERT INTO #getTable()# (#cols#)
      VALUES (
        :uuid,
        :severity,
        :category,
        :timestamp,
        :name,
        :message,
        :extraInfo,
        :tenantcode,
        :username
      )
    ",
    {
      uuid       : { cfsqltype : "cf_sql_varchar",   value : "#variables.uuid.randomUUID().toString()#" },
      severity   : { cfsqltype : "cf_sql_varchar",   value : "#arguments.data.severity#" },
      category   : { cfsqltype : "cf_sql_varchar",   value : "#arguments.data.category#" },
      timestamp  : { cfsqltype : "cf_sql_timestamp", value : "#arguments.data.timestamp#" },
      name       : { cfsqltype : "cf_sql_varchar",   value : "#left( getName(), 100 )#" },
      message    : { cfsqltype : "cf_sql_varchar",   value : "#arguments.data.message#" },
      extraInfo  : { cfsqltype : "cf_sql_varchar",   value : "#arguments.data.extraInfo#" },
      tenantcode : { cfsqltype : "cf_sql_varchar",   value : "#arguments.data.tenantcode#" },
      username   : { cfsqltype : "cf_sql_varchar",   value : "#arguments.data.username#" }
    },
    { datasource : getProperty( "dsn" ) }
  );
  return this;
}

So this makes sure your data will be inserted in the database. The async code from the AbstractAppender makes sure your application can continue before the insert is finished. There is only one thing missing, and only if you are not creating this logging table manually: the database appender has an ensureTable() method which creates the table if it is not there yet.

private function ensureTable(){
  var dsn        = getProperty( "dsn" );
  var qTables    = 0;
  var tableFound = false;
  var qCreate    = "";
  var cols       = getColumnNames();
  if ( getProperty( "autoCreate" ) ) {
    // Get Tables on this DSN
    cfdbinfo(
      datasource = "#dsn#",
      name       = "qTables",
      type       = "tables"
    );
    for ( var thisRecord in qTables ) {
      if ( thisRecord.table_name == getProperty( "table" ) ) {
        tableFound = true;
        break;
      }
    }
    if ( NOT tableFound ) {
      queryExecute(
        "CREATE TABLE #getTable()# (
          #listGetAt( cols, 1 )# VARCHAR(36) NOT NULL,
          #listGetAt( cols, 2 )# VARCHAR(10) NOT NULL,
          #listGetAt( cols, 3 )# VARCHAR(100) NOT NULL,
          #listGetAt( cols, 4 )# #getDateTimeColumnType()# NOT NULL,
          #listGetAt( cols, 5 )# VARCHAR(100) NOT NULL,
          #listGetAt( cols, 6 )# #getTextColumnType()#,
          #listGetAt( cols, 7 )# #getTextColumnType()#,
          #listGetAt( cols, 8 )# VARCHAR(50) NOT NULL,
          #listGetAt( cols, 9 )# VARCHAR(50) NOT NULL,
          PRIMARY KEY ( #listGetAt( cols, 1 )# )
        )",
        {},
        { datasource : getProperty( "dsn" ) }
      );
    }
  }
}

As you can see, the ensureTable() method uses some utility methods to determine which datatype to create in your database table. Columns 8 and 9 are the newly added columns, and this is where the order matters. Just add your new columns last in DEFAULT_COLUMNS, or you will have to do some more rewriting here!

So with our new username and tenantcode columns we have plenty of possibilities to filter and display our log entries. It might be useful to add some indexes (for example on logdate) if you plan on logging a lot. The default DBAppender has some nice options to rotate data after some days, and our extended appender inherits this behaviour.
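Adding such an index is a one-liner per column. A sketch, assuming it runs inside the appender (for example at the end of ensureTable(), so getTable() and the dsn property are available); the index names are made up:

```cfml
// Hypothetical index creation for the columns we filter on most.
// Run once, e.g. right after the CREATE TABLE in ensureTable().
queryExecute(
  "CREATE INDEX ix_logs_logdate ON #getTable()# ( logdate )",
  {},
  { datasource : getProperty( "dsn" ) }
);
queryExecute(
  "CREATE INDEX ix_logs_tenantcode ON #getTable()# ( tenantcode )",
  {},
  { datasource : getProperty( "dsn" ) }
);
```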

In the next posts I will show you how to create an audit log for all database changes by using my extended db appender and some interceptors and how to display these changes in a very simple admin interface.

Happy logging!