jamie on software.

and other things

Using Eloquent ORM inside CodeIgniter (with added query logging)

Ever the diplomat, I’ve been doing my best to use components of the wonderful Laravel in CodeIgniter apps that I’ve been working on for clients. Whilst I’ve switched almost completely to using Laravel as of late, I still work on legacy CI apps (and still work with clients who use CI for new apps, for that matter.) At any rate, Laravel is designed in a modular fashion, allowing me to plug my favourite components into CodeIgniter and use them without too much friction.

The most important of these components is, of course, the wonderful Eloquent ORM, which has provided me with as much joy as ActiveRecord did in the land of Ruby. Thankfully, getting Eloquent running inside CodeIgniter was easy. I’ll assume you have Composer installed – if you don’t, you really should. You’ll need to add Laravel’s database component to your composer.json (which includes Eloquent):

    "require": {
        "illuminate/database": "4.2.6"

I also use Composer’s autoloader as a broad replacement for CodeIgniter’s CI_Loader, so I’ve added some folders to my classmap. You don’t need to do this. I use application/classes as a dump-spot for random objects I need to use before I find a more suitable place for them, so don’t worry too much about that, either:

    "autoload": {
        "classmap": [

Install the dependencies and generate the autoload files:

$ composer install

You’ll also need to require the autoloader in order to access anything – I do this right at the top of my index.php file:

require_once '../vendor/autoload.php'; 

I keep the Laravel database component setup code inside my config/database.php config file. Not a particularly good idea, I know, so feel free to move it somewhere more appropriate (perhaps a hook? Or a library?). First, we alias the component’s Capsule class to something shorter:

use Illuminate\Database\Capsule\Manager as Capsule;

We then create a new instance of the component and add a new connection:

$capsule = new Capsule;

$capsule->addConnection(array(
    'driver'    => 'mysql',
    'host'      => $db['default']['hostname'],
    'database'  => $db['default']['database'],
    'username'  => $db['default']['username'],
    'password'  => $db['default']['password'],
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => $db['default']['dbprefix'],
));

We want this connection to be used as Eloquent’s connection, so we should set it globally. While we’re there, let’s boot Eloquent as well:

$capsule->setAsGlobal();
$capsule->bootEloquent();


We can now use Eloquent by importing the namespaced model class into our model files and extending our models from it. After that, everything works just as it does in Laravel:

use \Illuminate\Database\Eloquent\Model as Eloquent;

class User extends Eloquent { }

$users = User::all();

One thing that irked me about this setup was the fact that my database queries didn’t show up in my profiler. The CI profiler is an old but trusty tool for debugging and figuring out why pages are loading slowly, or how best to optimise your application and reduce duplicated queries. Back in our config/database.php, under the call to bootEloquent(), we will grab an instance of Laravel’s event dispatcher:

$events = new Illuminate\Events\Dispatcher;

We can now listen for the illuminate.query event, which is fired every time a query is run. Inside the callback, we grab the query text and splice the bindings into it so we have something readable to display:

$events->listen('illuminate.query', function($query, $bindings, $time, $name)
{
    // Format binding data for SQL insertion
    foreach ($bindings as $i => $binding)
    {
        if ($binding instanceof \DateTime)
        {
            $bindings[$i] = $binding->format('\'Y-m-d H:i:s\'');
        }
        elseif (is_string($binding))
        {
            $bindings[$i] = "'$binding'";
        }
    }

    // Insert bindings into query
    $query = str_replace(array('%', '?'), array('%%', '%s'), $query);
    $query = vsprintf($query, $bindings);

Now we’ve got a displayable query (and, incidentally, a runnable one, which is useful for debugging), we can add it to the profiler. The profiler looks to the CI database library object for each query and its associated running time. Since CI_DB_driver declares its properties with the var keyword, we can access them publicly and shove our query onto the end. CodeIgniter’s shitty design may not be particularly robust, but damn it is flexible:

    // Add it into CodeIgniter
    $db =& get_instance()->db;

    $db->query_times[] = $time;
    $db->queries[] = $query;
});

Now we have our listener, we just need to tell the DB component to notify our dispatcher of any events that may occur:

$capsule->setEventDispatcher($events);


And voilà, we have a query log.
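As an aside, the escape-then-vsprintf trick is handy on its own. Pulled out into a standalone function (my naming, not part of any library), it looks like this:

```php
<?php

// Splice bindings into a parameterised query, for display only --
// the result is for logging and debugging, not for executing.
function interpolateQuery($query, array $bindings)
{
    foreach ($bindings as $i => $binding)
    {
        if ($binding instanceof DateTime)
        {
            $bindings[$i] = $binding->format("'Y-m-d H:i:s'");
        }
        elseif (is_string($binding))
        {
            $bindings[$i] = "'$binding'";
        }
    }

    // Escape literal % signs, then turn each ? placeholder into %s
    $query = str_replace(array('%', '?'), array('%%', '%s'), $query);

    return vsprintf($query, $bindings);
}

echo interpolateQuery('SELECT * FROM users WHERE name = ? AND age > ?', array('Jamie', 21));
// SELECT * FROM users WHERE name = 'Jamie' AND age > 21
```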

Eager-loading Eloquent polymorphic associations

Laravel’s Eloquent doesn’t allow for eager loading of polymorphic relationships. While this is a very niche issue, it has come up more than once now and I know that I’ll see it again. Rather than just put up with it and face a lot of redundant queries, I decided to write a small class to handle the fetching of polymorphic associations.

The class works like this: given an array of models and the name of the association, it pulls out all the related types and their IDs, then fetches the related objects with one query per type. After we’ve got the related objects, it’s just a matter of setting them on each model, as if Eloquent had populated them itself.

The code is simple:

class AssocFetcher
{
    protected $assoc;

    public function __construct($assoc)
    {
        $this->assoc = $assoc;
    }

    public function fetch($models)
    {
        $relationships = array();
        $objects = array();

        $typeCol = "{$this->assoc}_type";
        $idCol = "{$this->assoc}_id";

        foreach ($models as $model)
        {
            $relationships[$model->$typeCol][] = $model->$idCol;
        }

        foreach ($relationships as $class => $ids)
        {
            $objects[$class] = $class::findMany($ids);
        }

        foreach ($models as $model)
        {
            $model->setRelation($this->assoc, $objects[$model->$typeCol]->find($model->$idCol));
        }

        return $models;
    }
}

I can then use it in my controller like so:

$assocs = new AssocFetcher('subject');
$changes = $assocs->fetch(Change::all());

With the above code, each of my Change objects will be populated with a subject relationship, based on the polymorphic subject_type and subject_id columns. This way, I can use my polymorphic related object with ease:

foreach ($changes as $change)
    echo $change->subject->name;

With this class, I’ve gone from 1 + n queries (1 query to get the changes, n queries to get each change’s subject) down to 1 + a, where a is the number of distinct types of associated object. It can be the difference between 400 queries and 4, so it’s well worth investigating ways of reducing the query count on the page by being a bit cleverer and a bit more forward-thinking.
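The heart of the trick is just the grouping step: bucket the IDs by type so that each type needs a single findMany() call. Stripped of Eloquent, with stand-in arrays instead of models, it amounts to this:

```php
<?php

// Stand-in rows with polymorphic subject_type / subject_id columns
$changes = array(
    array('subject_type' => 'Post',    'subject_id' => 1),
    array('subject_type' => 'Comment', 'subject_id' => 7),
    array('subject_type' => 'Post',    'subject_id' => 3),
);

// Bucket the IDs by their class name...
$idsByType = array();
foreach ($changes as $change)
{
    $idsByType[$change['subject_type']][] = $change['subject_id'];
}

// ...so we'd issue one query per type:
// Post::findMany(array(1, 3)) and Comment::findMany(array(7))
print_r($idsByType);
```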

Cleverer queries with MySQL’s IF()

I’ve been building a simple dashboard for a training company. On the dashboard, I need to retrieve some basic statistics - a few COUNTs - from a database table. This isn’t anything fancy, just a basic JOIN and the retrieval of a few columns.

The basic query looks something like this:

SELECT COUNT(quiz_results.id) AS modules_taken, quiz_results.*
FROM quiz_results
JOIN modules ON quiz_results.module_id = modules.id
WHERE modules.client_id = 123
  AND quiz_results.client_location_id = 1

We don’t only want to retrieve the number of modules taken, we also want to retrieve the modules passed:

SELECT COUNT(quiz_results.id) AS modules_passed, quiz_results.*
FROM quiz_results
JOIN modules ON quiz_results.module_id = modules.id
WHERE modules.client_id = 123
  AND quiz_results.client_location_id = 1
  AND quiz_results.passed = 1

…and the modules passed for the first time:

SELECT COUNT(quiz_results.id) AS modules_first_time, quiz_results.*
FROM quiz_results
JOIN modules ON quiz_results.module_id = modules.id
WHERE modules.client_id = 123
  AND quiz_results.client_location_id = 1
  AND quiz_results.passed = 1
  AND quiz_results.first_time = 1

These are still simple queries; the complexity of the query isn’t the problem. However, there’s every chance that these queries will get more complicated, so it’d be good to make it as flexible as possible.

Furthermore, I don’t like the fact that we need to run the query three times in order to get the data we need. This table already has 50,000 rows in it; it’s likely to grow tenfold when the system is rolled out across the country. There’s no point running what is essentially the same query three times over.

How can we grab the counts for each new WHERE clause without running a separate query (or, worse, a subquery)?

MySQL gives us an IF() function that, helpfully, works inside COUNT(). We give it a condition, a value to return if the condition is met and a value to return if it is not. The condition is then evaluated against each row and the appropriate value returned.

This means we can set our IF() calls up to return 1 if each condition holds - i.e. the row matches - and NULL otherwise, which COUNT() ignores:

SELECT COUNT(quiz_results.id) AS modules_taken,
       COUNT(IF(quiz_results.passed = 1, 1, NULL)) AS modules_passed,
       COUNT(IF((quiz_results.passed = 1 AND quiz_results.first_time = 1), 1, NULL)) AS modules_first_time
FROM quiz_results
JOIN modules ON quiz_results.module_id = modules.id
WHERE modules.client_id = 123
  AND quiz_results.client_location_id = 1

IF() works with other functions too:

SELECT COUNT(IF(STRCMP(quiz_results.name, "Jamie Rumbelow") = 0, 1, NULL)) AS jamie_modules
FROM quiz_results
JOIN modules ON quiz_results.module_id = modules.id
WHERE modules.client_id = 123
  AND quiz_results.client_location_id = 1

Using IF() is a quick way to combine multiple queries into one. It won’t change your life, but it will make your application a little quicker and your code a little more DRY.
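Since MySQL treats a boolean expression as 1 or 0, the same counts can also be written with SUM(), which some find more readable. (One caveat: SUM() returns NULL rather than 0 over an empty set, so wrap it in COALESCE() if that matters.)

```sql
SELECT COUNT(quiz_results.id)               AS modules_taken,
       SUM(quiz_results.passed = 1)         AS modules_passed,
       SUM(quiz_results.passed = 1
           AND quiz_results.first_time = 1) AS modules_first_time
FROM quiz_results
JOIN modules ON quiz_results.module_id = modules.id
WHERE modules.client_id = 123
  AND quiz_results.client_location_id = 1
```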

Twenty Fourteen

Twenty Thirteen was bloody good fun. It was a year full of hard work, monster hangovers, successes and failures.

Like last year, I’ve got a list of things I’d like to do / achieve / accomplish for the following twelve months.

But a quick recap on last year’s goals:

  • Train for, and run, the Cambridge Half Marathon. This did not happen. I get to 4/5 miles and hate it so much I can’t bear to continue. But, on the positive side, I’m now playing tennis 3 times a week, have been for the past 4 months and will continue to do so.
  • Drink less caffeine and drink more water. I’m drinking more water, but not any less caffeine. But I don’t care. This was a stupid idea.
  • Apply to Oxford University. I applied to read Philosophy, Politics and Economics at St. Hugh’s College, Oxford at the beginning of October. I had an interview(!) mid-December, and find out if I have a place on Jan 8th.
  • Travel somewhere new. I visited the east coast of America for the first time: Washington D.C., New York, Philadelphia, Baltimore and Connecticut.
  • Get a tattoo. Didn’t happen. Not sure it will.
  • Quietly grow Efendi. We released a couple more books; I’m moving them over to Leanpub soon to host them over there.
  • Learn to play the bass guitar properly. I’m now playing bass a lot more frequently. Still not as much - or as well - as I’d like to, but I’m getting there.
  • Start saving some money. Didn’t happen. Maybe next year?

And what about the year ahead?

  • Don’t drink alcohol. This is not a permanent move, nor is it a self-pitying if-I-make-big-promises-this-hangover-might-go-away cry for attention; it’s a lifestyle change I’m going to try out for a year. I want to see what effect it has on my health, my sleep, my productivity and my happiness.
  • Start saving some money. I still haven’t managed to work out how to do this, but, by the end of the year, I want to have £10,000 in a savings account. (Or to have saved up the money and spent it travelling; either way will be a success.)
  • Write another book. I really enjoyed putting the CodeIgniter Handbook together. It’d be great to write something else on a different topic. Laravel, maybe?
  • Read a load more. I’ve got a dozen, maybe more, books that I want to get through. Lots of literature, lots of political reading, lots of philosophy.
  • Visit Asia. I’d like to visit Asia for a little while, and perhaps do some nomadic development for a few months. My Dad grew up in Hong Kong; I’d like to visit there most of all.
  • Go back to San Francisco. San Francisco is, by far, my all-time favourite city, and I really need to head back.
  • Get more engaged with open source again. I miss StackOverflow, I miss GitHub and I miss writing code.
  • Improve my fluency in Spanish. I can already speak passably conversational Spanish; I’d like to get a lot more confident.

It’s an ambitious list - more so than last year’s! - but I reckon I’ve a good opportunity to take a fair crack at it.

I’m going to head off now. I’ve got 32 hours of drinking left that I want to take advantage of.

The Pomodoro Technique


I am constantly finding myself at the intersection of busyness and laziness. A million items sit on today’s todos and yet my completed counter rarely reaches double-digits. The irony of the situation is that I’m not actually that busy, nor am I that lazy. I work hard, push and rush my way through the day to get nowhere in particular. Which is a colossal waste of time. And the older I get, the more I resent wasting my time, as, the older I get, the more time I have wasted.

After battling with this problem for a while, I realised that it’s not me or my ability to produce great work under time constraints; it’s the system I’m using to manage that time. Or, at least, the lack of one. Sitting down at your desk simply expecting to be productive is a grave error, and the optimal A* path to procrastination.

In order to solve this issue, I’ve been experimenting with a hugely popular, yet little-known technique called Pomodoro. Pomodoro came into my life several years ago, but I’ve only recently adopted it full-time; so far it’s been immeasurably1 useful and I can highly recommend it to anybody who ever does anything creative.

Pomodoro2 is built upon the assumption that frequent breaks improve mental agility and longevity: your ability to work highly creatively, at high speed, for an extended period of time. The gist is this: tackle a task for 25 minutes, then break for five. After four ‘pomodori’, take a longer break (15-30 minutes).

Give it a go now. Break down a task you need to do soon into 25 minute intervals. Assess the complexity of the task and estimate how many pomodori you think it will take to complete. Plug your headphones in and get cracking.

Implementing the Pomodoro Technique has had a number of interesting effects:

  • I’m closing Twitter when on a pomodoro and getting my Twitter fix in one of the five-minute breaks. It’s amazing how much time I previously wasted on Twitter and how much more productive I am when I’m not checking it every few minutes. The best part is, it hasn’t been difficult. Nothing dramatic happens in 25 minutes. Twitter is pretty much the same as when I left it a mere pomodoro ago.
  • I only receive an email or two during one of these periods, which I can deal with quickly. I’m now at Inbox Zero most days. I speed-read the email, delete it or add something actionable to my todo list and archive it. It’s liberating.
  • The 30 minute break once every four pomodoros is really useful for meditation. I can fit in a session of Headspace’s Take 15 easily, and still have enough time to get a coffee.
  • I feel more productive. I know this isn’t a particularly valuable metric, but there’s certainly something to be said for getting through a list of things. I’m distracted less. I procrastinate less. I jump straight into the zone and get straight on with stuff.
  • You know that feeling where you go away from a problem for a while, return and find the answer spewing from your fingertips the moment you sit down? Yeah, that. Turns out five minutes is often all you need to break through the wall.
  • I can now bill clients with confidence. I have a record of how many pomodoros I spent working on each task, and can count them up, multiply by 25 and have a pretty much exact log of billable time. This also usually includes time emailing, something that really should be billed for but is frequently missed.3

It’s been great so far, but I’ve had to make a few adjustments to what they recommend on the website. Some pointers for those beginning the Pomodoro Technique:

  • Don’t worry about writing each pomodoro down on the special sheet like the website suggests, and don’t worry about logging interruptions. It adds red tape to the process, and I’m a liberal, so I despise bureaucracy. Just keep a quick log of how many pomodoros a task takes (a post-it note suffices).
  • Be disciplined with the five minute breaks. It’s remarkable how difficult it is to pry yourself away from your tasks when you’re stuck in; the break is crucial for this to work, so make sure you take it.
  • Try to commit your code before the end of the pomodoro. Try to find something that you can consider ‘wrapped up’, even if the entire implementation is unfinished. This means you’re committing early and often, and it forces you to break what you’re thinking about down into smaller components.
  • You’ll need music. Trust me on this one.

Pomodoro works exceptionally well with programming, because it breaks up complex conceptual tasks as easily as it breaks up the mundane. It’s great for writing too - this post was written in just over one pomodoro and edited in the second one’s remaining minutes.

I’ve also found it’s the optimal unit of time for studying. I get stuck in to my topic and have enough time to glean some insight without being overloaded.

So, here’s to Pomodoro, and here’s to the faster, better, smarter, sexier time management experts we all aspire to be.

Anyway, I must be off, I have some Twitter to catch up on.

  1. Immeasurably used as an intensifier here. It’s a time tracking tool, so, logically, its usefulness is quantified and measured by how much time it saves, or how much more productive you become from using it.

  2. The word is Italian for tomato, and comes from the literal Italian ‘pomo d’oro’, golden apple. Golden, I assume, because it’s the basis of most Italian cooking, and Italian cuisine is the best in the world.

  3. If I finish a task before the pomodoro ends, I round to the nearest 25 minutes. It always balances out: the worst I lose out by if I round down is 12 minutes, and the most I take extra from my clients is 12 minutes. Since I bill at minimum by the hour anyway, it’s mitigated for both me and my clients.

Rails-like HTML-friendly resourceful routing in Laravel

While Laravel 4’s routing engine is certainly powerful, it didn’t quite fulfil my desires with resourceful, RESTful routing. By default Laravel’s resource routing is geared toward writing APIs - making use of the full range of HTTP verbs - and this isn’t very helpful when working with a web browser.

Rather than replicate Rails’ resource routing, where the HTTP verb is faked with a _method parameter, I decided to write a small helper function to manually route the basic set of resource routes to a controller.

The function looks like this:

// Rails-like routes
function htmlResource($resources, $controller)
{
    $resource = str_singular($resources);

    Route::get($resources,
                array( 'as' => $resources, 'uses' => $controller . '@getIndex' ));

    // /new must be registered before /{id}, or 'new' gets captured as an id
    Route::get($resources . '/new',
                array( 'as' => 'new_' . $resource, 'uses' => $controller . '@getNew' ));
    Route::get($resources . '/{id}',
                array( 'as' => $resource, 'uses' => $controller . '@getShow' ));

    Route::post($resources,
                array( 'as' => 'create_' . $resource, 'uses' => $controller . '@postCreate' ));
    Route::get($resources . '/{id}/edit',
                array( 'as' => 'edit_' . $resource, 'uses' => $controller . '@getEdit' ));
    Route::post($resources . '/{id}',
                array( 'as' => 'update_' . $resource, 'uses' => $controller . '@postUpdate' ));
    Route::post($resources . '/{id}/delete',
                array( 'as' => 'delete_' . $resource, 'uses' => $controller . '@postDelete' ));
}

…and should be called in your routes file, where the first parameter is the plural name of the resource and the second the name of the controller. This will route the basic CRUD actions and name each route, too.

One call to this function:

htmlResource('users', 'UsersController');

Gives me a php artisan routes output of this:

| URI                     | Name        | Action                          |
| GET /users              | users       | UsersController@getIndex        |
| GET /users/new          | new_user    | UsersController@getNew          |
| GET /users/{id}         | user        | UsersController@getShow         |
| POST /users             | create_user | UsersController@postCreate      |
| GET /users/{id}/edit    | edit_user   | UsersController@getEdit         |
| POST /users/{id}        | update_user | UsersController@postUpdate      |
| POST /users/{id}/delete | delete_user | UsersController@postDelete      |

GROUP_CONCAT for happier, more productive JOINs

I frequently find myself JOINing two tables together for a simple many-to-many relationship. This is usually a relationship between users and groups, or posts and tags, or books and categories: the kind of relationship that you rarely access both ways but need to store as normalised tables nonetheless.

Today, I was writing a small role-based authentication system: users belong to any number of roles, stored across a users table, a groups table and a users_groups join table.

I want my user objects to have various booleans for their various roles - is_admin, is_staff, things like that - so I’d usually do something like this:

$user = $this->db->where('id', 1)->get('users')->row();
$roles = array( 'admin', 'staff', 'client' );
$user_roles = $this->db->select('roles.name')
                       ->where('roles.id = users_roles.role_id')
                       ->where('users_roles.user_id', $user->id)
                       ->get('roles, users_roles')->result();

// Pull the role names out of the result objects
$role_names = array();
foreach ($user_roles as $user_role)
{
    $role_names[] = $user_role->name;
}

foreach ($roles as $role)
{
    $user->{"is_" . $role} = (bool)in_array($role, $role_names);
}

It’s not that hideous a solution, but I don’t like that extra query and I’m sure I can do better.

Turns out MySQL has a pretty nifty function for things like this where I can include my roles as a JOIN and then concatenate them into a single column. It’s called GROUP_CONCAT.
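In raw SQL - table names taken from the query builder calls below - that looks something like this (note the GROUP BY, so the names are concatenated per user):

```sql
SELECT users.*, GROUP_CONCAT(groups.name) AS roles
FROM users
LEFT JOIN users_groups ON users.id = users_groups.user_id
JOIN groups ON users_groups.group_id = groups.id
WHERE users.id = 1
GROUP BY users.id
```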

Instead, I can do this:

$user = $this->db->select('users.*, GROUP_CONCAT(groups.name) AS roles')
                 ->where('users.id', 1)
                 ->join('users_groups', 'users.id = users_groups.user_id', 'left')
                 ->join('groups', 'users_groups.group_id = groups.id')
                 ->group_by('users.id')
                 ->get('users')->row();

$user->roles = explode(',', $user->roles);

foreach ($roles as $role)
{
    $user->{"is_" . $role} = (bool)in_array($role, $user->roles);
}

…and now there’s no need for the extra query.

With some simple callbacks in MY_Model I can tidy it up even further:

class User_model extends MY_Model
{
    protected $primary_key = "users.id";
    protected $roles = array( 'admin', 'client', 'staff' );

    protected $before_get = array( 'with_roles' );
    protected $after_get = array( 'unpack_roles' );

    protected function with_roles()
    {
        $this->db->select('users.*, GROUP_CONCAT(groups.name) AS roles');

        $this->db->join('users_groups', 'users.id = users_groups.user_id', 'left');
        $this->db->join('groups', 'users_groups.group_id = groups.id');
    }

    protected function unpack_roles($user)
    {
        $user->roles = explode(',', $user->roles);

        foreach ($this->roles as $role)
        {
            $user->{"is_" . $role} = (bool)in_array($role, $user->roles);
        }

        return $user;
    }
}
Jasper Tandy has reminded me that GROUP_CONCAT’s return value has a length limit (1024 characters by default) that sneaks up on you - so be careful. For small amounts of grouped data like my roles example, though, GROUP_CONCAT seems like a perfect solution.
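That limit is the group_concat_max_len server variable, and it can be raised if you ever do need longer results:

```sql
-- Default is 1024; raise it for the current session
SET SESSION group_concat_max_len = 8192;
```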

Memcached vs Redis

I have been debating with one of the developers at LogicPad about the pros and cons of various caching systems and we’ve got it narrowed down to Memcached and Redis. I’ve used Memcached a lot in the past, but am now firmly fighting the corner of Redis. I thought I’d summarise my thoughts on the matter here.

The conversation began when we were talking about message queues and using Resque, a Redis-backed message queue. Originally everyone agreed on Memcached for the caching, but after deliberating my opinion has changed.

Here’s why:

  • Performance: Redis has amazing performance, and it often out-performs Memcached. There have been a number of benchmarks: in some, Memcached only slightly out-performs Redis; in many more, Redis is the clear victor (http://oldblog.antirez.com/post/redis-memcached-benchmark.html).
  • Feature-set: Redis’s feature set is much wider than Memcached’s: you get a bunch of data types (hashes, lists), which Memcached could never support; persistence (very important, else you have to warm up the cache when you restart the server); transactions; complex operations (which means you can offload processing to the Redis server rather than make lots of IO calls to and from it); and pub/sub support. These are all features that will make our lives significantly easier down the road, as well as improve our development speed immediately.
  • Reliability: Redis has been used for years and years as the main caching system for GitHub, Twitter, the Guardian, Instagram, Craigslist and Stack Overflow / Stack Exchange, among countless others. It’s a tried and tested system. Memcached has obviously been around for longer, and is tried and tested even more so; this point is just to recognise that Redis’s relative newness isn’t something to worry about.
  • Development / support: Redis is being developed and worked on every day. A quick look at the GitHub repositories shows us the commit activity is vastly different - Redis vs Memcache - and while this isn’t a poor reflection on memcached per se, it does show that Redis is a helluva lot more active. Since they’re both open source projects this is actually quite a useful metric.
  • One less instance: LogicPad is going to need a message queue - that’s inevitable. Queueing lets us speed up the user experience substantially and save our servers a load of work, and I’ve found it an indispensable weapon in my arsenal. If we use Resque we can use the same Redis instance (or set of instances) for both caching and queuing, which means less work and fewer opportunities for things to go wrong.
  • Clients: there are native C PHP clients available for Redis, and most modern caching libraries (including Laravel’s standalone cache) support it. The PHP extension is really easy to use and we’ll want to use it if we want to take advantage of Redis’s more powerful features.
  • Admin system: because Redis gives us the ability to actually look at the datastore and collects lots of information about it all, it allows us to run an internal monitoring system for it (like the sexy Redmon) which can come in very handy when developing or debugging. Memcache is simply unable to do this.

In short, I think anyone would be foolish not to use Redis at this point. It’s tried, tested and infinitely more powerful than Memcache, while being just as performant (or even more so in many cases). It’s an awesome piece of software, and next to it, Memcached emerges as a trusted friend, but one that’s getting old and tired. So I suggest we take Memcached out into the back yard and shoot it, and let Redis into our lives.

I just heard this story - relevant to my philosophical studies about free will - and decided it was interesting enough to blog. It’s about fate and the will of what is beyond your control.

It’s an old Islamic parable called Death in Samarkand:

The disciple of a Sufi of Baghdad was sitting in an inn one day when he heard two figures talking. He realised that one of them was the Angel of Death.

"I have several calls to make in this city," said the Angel to his companion.

The terrified disciple concealed himself until the two had left. To escape Death, he hired the fastest horse he could find and rode day and night to the far distant city of Samarkand.

Meanwhile, Death met the disciple’s teacher, and they talked of this and that. “And where is your disciple, so-and-so?” asked Death.

"I suppose he is at home, where he should be, studying," said the Sufi.

"That is surprising," said Death, "for here he is on my list. And I have to collect him tomorrow, in Samarkand, of all places."

Catapult into PyroCMS

Yesterday, Efendi Books released our latest title - Catapult into PyroCMS. PHP guru Phil Sturgeon has written a rather brilliant book about the popular open-source CMS, PyroCMS, that he created and develops.

The content is superb - it’s a great introductory guide to a very good CMS, with enough in there for developers and designers to get used to building and developing applications and websites using it. It also looks gorgeous, with some wonderful illustrations by the amazing Jenny Thorne.

Here’s some text from the introduction:

In this book - from PyroCMS creator Phil Sturgeon - you’ll learn the core concepts underpinning Pyro, create malleable templates and themes, utilise Streams for powering content-heavy sites and examine the basics of writing addons for the Pyro platform. You’ll learn the theory behind the CMS, how to mould the system around your content and how to use the powerful, designer-friendly template layer.

It’s a fantastic title and it’s already getting some great reviews, so be sure to grab your copy now!