Part 1 – NodeJS: The good, the bad and the Javascript

I like Javascript. Quirks and all.

It’s a frustrating language that allows for a quick feedback loop and is so flexible that you can bend it 270 degrees and still achieve the same result.

I call it frustrating because no matter how much effort you put into making your code look nice and readable, you will have to make way for the quirkiness of the language so it can do what you want it to do.

But I like Javascript. And I like NodeJS. Not because it’s the new kid on the block, but because it is, indeed, very cool. With its quick feedback loop and thousands of packages available at npmjs.com, you can find a solution for pretty much anything you need.

I’ve been in a team where we’ve been ‘Noding’ for the last four months, doing a multitude of different things in order to deliver a few microservices. During this time we had to map domain models, manage memory leak issues, pipe data from third-party services and ensure that everything was deployed in the cloud.

I thought it would be useful to collate some information from these last four months into a blog post. Or a small series about it. That’s why I’m going to split this into 3 parts: The Good, The Bad and The Javascript.

As it turns out, there’s so much stuff out there that this post became some sort of ‘tools to keep in the back of your mind when developing in NodeJS’ kinda blog post. Hope you find it useful.

The good

Performance

Blazing fast: as developers we want our unit tests to run fast, our build to take less than a minute and our coffees to stay hot the whole morning.

The feedback loop from NodeJS is a developer’s dream: we have a couple of projects, each using a different framework, and our average build time is about a minute on our build box:

  • Checkout from repo
  • Run code quality checks
  • Run unit tests
  • Run code coverage
  • Run UI tests
  • Run acceptance tests
  • Zip it up

In terms of processing speed, being a non-blocking, single-threaded, async process makes the difference. There are tons of benchmark tests of NodeJS against J2EE, Rails and even io.js. Suffice it to say that an Express app can easily handle 500+ requests per second with an average response time of 150 milliseconds. Hell, even Walmart is running on NodeJS, serving millions of requests.
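
For context, the kind of app in those benchmarks is tiny. Here is a minimal Express server – a sketch for illustration, not our actual service:

var express = require('express');
var app = express();

// a single non-blocking route; the event loop handles requests concurrently
app.get('/ping', function (req, res) {
  res.json({ pong: true });
});

app.listen(3000, function () {
  console.log('listening on port 3000');
});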

What is interesting is the fact that io.js, after being integrated into the Node Foundation, is now about to merge all of its performance goodness back into Node. The whole journey seemed a bit dubious at first but now it feels like it was necessary: without the split things probably wouldn’t have moved as fast and all the goodies introduced into io.js would have taken a lot longer to make it into Node itself.

Flexibility

Within our team we had an initial discussion about what would be most beneficial: using npm scripts or writing our own bash scripts to execute the tasks we needed. As it turns out you can do both – and we ended up using both. You can even write a JS file, put #!/usr/bin/env node at the top and run it like it’s a bash file.
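
For example, a trivial script along those lines (the file name is hypothetical – chmod +x greet.js and run it directly):

#!/usr/bin/env node
// greet.js – a hypothetical example of a JS file run like a shell script
var os = require('os');
console.log('Hello from ' + os.hostname());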

What I like about the Node environment is this flexibility. Npm in particular allows you to hook any kind of command into your npm:scripts section. It is so flexible that a strategy was outlined by Keith Cirkel late last year to replace build tools in favor of npm:scripts. I tend to agree with many things he wrote and I like the approach he proposed. A lot has been written and discussed around Gulp vs Grunt vs npm vs fruit salad… and, ultimately, projects and their needs evolve, so suffice it to say that no single approach is the right one in the Node world.
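
As a sketch of that approach (the task names are purely illustrative), a scripts section in package.json can chain anything you would otherwise put in a Gulpfile:

{
  "scripts": {
    "lint": "jshint lib test",
    "test": "mocha --recursive",
    "cover": "istanbul cover _mocha -- --recursive",
    "build": "npm run lint && npm test && npm run cover"
  }
}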

But the ultimate flexibility is the ability to have dependencies that rely on native libraries. You can add them to your package.json and npm will automatically compile the stuff for you. Cool huh?
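
For example – bcrypt is just one well-known module with native bindings, used here purely for illustration, and the version shown is indicative – declaring it in package.json is all it takes:

{
  "dependencies": {
    "bcrypt": "^0.8.0"
  }
}

Running npm install pulls down the source and compiles the native addon (via node-gyp) behind the scenes.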

Recently a new tool showed up for building native modules: cmake-js. It relies on the very popular CMake instead of the outdated gyp and provides a much better build environment – gyp relies on old software and Google is moving away from it.

There’s a catch, though: the native development libraries have to be installed on the build machine, and your build will fail if they are not available. Plus they do take some time to compile, so you probably want to cache them.

CPU & Memory consumption

Well, it really depends on what you do. As far as CPU goes, very little is actually used when running a clustered web server (such as Express). As I mentioned earlier, we have different frameworks in our architecture and we are deploying all of them on micro instances in AWS, as what those provide is more than enough for the Express apps to run in. As a matter of fact they are so snappy that upgrading them right now would be a waste of resources – and money.

Sure, some of them connect to databases. Sure, some of them need a bit more grunt. But I’m not talking about Tomcat here, nor even Jetty. Running a microservice in Node is actually very inexpensive as far as CPU goes, and as long as your service continues to run in a stateless fashion you will be fine.

Memory-wise it’s about the same story. We had to build a data slurper that connects to a third-party service to retrieve and transform their records into our model. We are not talking about much – about 10 thousand records – but it does make a difference how you build it. Using promises proved to be very effective as we could process the incoming records in a pipeline fashion, making it easy to hook in maps, transformers, injectors and everything else while keeping memory consumption low and CPU usage below 20%.
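
The shape of that pipeline looks roughly like this – a minimal sketch assuming ES6-style promises (or a library like bluebird); fetchRecords, toDomainModel and save are hypothetical stand-ins for our actual mappers and injectors:

// hypothetical stand-ins for the real service calls and mappers
function fetchRecords() { return Promise.resolve([{ raw: 1 }, { raw: 2 }]); }
function toDomainModel(record) { return { id: record.raw }; }
function save(model) { return Promise.resolve(model); }

fetchRecords()                              // resolves with an array of raw records
  .then(function (records) {
    return records.map(toDomainModel);      // map third-party records to our model
  })
  .then(function (models) {
    return Promise.all(models.map(save));   // persist each mapped model
  })
  .then(function () {
    console.log('slurp complete');
  })
  .catch(function (err) {
    console.error('slurp failed:', err);
  });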

There are tools to help you manage CPU and memory, though. PM2 is a great example: it provides a bunch of goodies, but one of the best is the ability to monitor your cluster – how much memory and CPU each process is consuming – while keeping your terminal free. The way it manages the Node processes is truly great. Recently they have even released a new tool named pm2-webshell. Just do yourself a favor and check it out.
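
Day-to-day usage is simple – a sketch of typical commands:

pm2 start app.js -i 4    # run app.js as a cluster of 4 processes
pm2 monit                # live per-process CPU and memory view
pm2 logs                 # tail logs from every process
pm2 reload all           # zero-downtime reload of the cluster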

As with anything, just be careful about which tools you pick to run your job, as you may end up with CPU spikes, as I’ll explain later on.

Testing

Mocha is the king of NodeJS testing. Very similar to rspec in terms of structure (with before-all and after-all hooks, as well as context) and powered by a multitude of testing extensions, you can’t go wrong with it.

Make sure you explore Mocha: you can check for memory leakage, use different reporters and set up your test environment before loading any tests by loading plugins. A sample options file would be something like this (mocha --help will get you the answers for the parameters below):

-r test/support/testPrepare
--recursive
--inline-diffs
--bail
--check-leaks
-A

Tools like Chai and Sinon are must-haves in terms of testing:

  • Chai, a collection of assertion libraries, provides so many goodies that you hardly have to write your own. In fact it’s so complete that even writing unit tests becomes fun again.
  • Sinon provides stubbing, mocking, spying and fake timers. It’s a powerful library which I tend to use very often, but there’s always some confusion as to how we should use it: mocks and stubs are often confused, leading to misuse. Verifying mocks is also a bit daunting – very Mockito-like – in that we set everything up first and then verify our mock when executing our it() block.

Note on Sinon: they are removing mock when version 2 comes along, so update your tests, people!
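
Here’s a tiny example of Chai and Sinon together inside a Mocha test – the mailer and greetNewUser below are hypothetical, purely for illustration:

var chai = require('chai');
var sinon = require('sinon');
var expect = chai.expect;

// hypothetical function under test
function greetNewUser(mailer, user) {
  mailer.send(user.email, 'Welcome, ' + user.name + '!');
}

describe('greetNewUser', function () {
  it('sends exactly one welcome email', function () {
    var mailer = { send: function () {} };
    var send = sinon.stub(mailer, 'send'); // stub it so no real email goes out

    greetNewUser(mailer, { email: 'ada@example.com', name: 'Ada' });

    expect(send.calledOnce).to.be.true;
    expect(send.firstCall.args[1]).to.contain('Ada');
  });
});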

As opposed to Jasmine, where you only have spies and pretty much everything comes out of the box, I believe Mocha is a lot more flexible: integrating third-party extensions such as Sinon and Chai is easy, and you also get extra hooks to better structure your tests.

As far as testing microservices goes, we also had to write some acceptance tests and, being microservices, there are integration points everywhere. For this, a little tool called nock will help you lots: it allows you to mock the requests being executed, and you can manipulate them as well. The only drawback I found is that you can only mock one endpoint: if you have two integration points you will have to filter the scope and allow requests to pass through via a regular expression. Kinda fiddly, but it works.
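
A sketch of typical nock usage (the host and payload are made up):

var nock = require('nock');

// intercept calls to the (hypothetical) third-party service
var thirdParty = nock('http://api.example.com')
  .get('/records')
  .reply(200, [{ id: 1, name: 'first record' }]);

// ... exercise the code under test, then:
thirdParty.done(); // throws if the mocked endpoint was never called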

Also, with testing you probably want to run code quality tools such as code coverage and style checks. For code coverage you can rely on Istanbul, which is very flexible and – as you probably expect – allows you to annotate your code in areas where you don’t want coverage to run. Or exclude files or directories, etc.
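
The annotations are just comments, for example:

/* istanbul ignore next */
function devOnlyHelper() {
  // this function is excluded from the coverage report
  console.log('skipped by the coverage tool');
}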

JSHint and JSCS are a given to keep your code consistent, and you can configure them as you see fit as well. Hint: specify your rules from the start, don’t wait. When you lint your code it will give you the defaults, but the defaults seem too open – at least to me. A good JSHint file will include cyclomatic complexity and possibly enforce some good practices such as triple equals and curly braces.
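
A starting point for a .jshintrc along those lines (the thresholds are just ones that worked for us, not gospel – JSHint strips comments from the file):

{
  "eqeqeq": true,         // enforce triple equals
  "curly": true,          // enforce curly braces around blocks
  "maxcomplexity": 6,     // flag functions with high cyclomatic complexity
  "undef": true,          // error on undeclared variables
  "unused": true          // warn on unused variables
}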

Run, kill, reload, debug … automagically

Yes, how cool is it that you can just change a file and automagically your changes are already live? And if your process dies, what about bringing it back up without the hassle of having to restart it yourself? Cool, huh?

Node provides a few ways to do that, and they are all easily installable dependencies.

A lot of people prefer forever, but we found that forever doesn’t actually do a good job in terms of memory management. If you have the auto-reload function enabled and you have a compilation error, it will eat your CPU. In fact it will become so slow that even switching between windows will be a pain.

After some research we found that pm2 is a much better alternative. It provides better memory management and better admin tools – which helps a lot when running in cluster mode. Plus it’s in active development whereas forever’s last commit was about 4 months ago.

Lastly, for debugging, simply use node-debug. It allows you to use the dev tools in your browser to inspect variables, fiddle with the console, etc. I’ve had some experiences with it where it was not so accurate as far as line numbers go, but after updating the version (installed globally) things stabilised.
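
Usage is a one-liner (node-debug ships with the node-inspector package):

npm install -g node-inspector    # installs the node-debug command
node-debug app.js                # starts your app paused and opens DevTools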

That’s it for the good parts! Stay tuned for the bad parts. Until then, cheerio!

Read Part 2 – NodeJS: The good, the bad and the Javascript.

Read Part 3 – NodeJS: The good, the bad and the Javascript.

Source of blog: http://tarciosaraiva.com/2015/06/12/nodejs-the-good-the-bad-and-the-javascript-part-1/
