First dive into Domino and Node.js

Wednesday, October 10, 2018 at 10:52 PM UTC

Man, what a 36 hours!

We had the V10 product launch yesterday, celebrated in style in Frankfurt (Germany) and streamed perfectly for all of us home-bound spectators. Thanks for that, it was very entertaining!

This morning I downloaded all available V10 packages, which included the server for Windows and Linux and the clients for Windows only - no Mac client at the moment. The IBM AppDev Pack, which contains all the Node.js-related stuff, was supposed to be available via the beta forum - at least it should have been, but I wasn't able to download it there at all. Thanks to a friend from the yellow bubble I got it anyway.

The Node.js support is currently only available for Linux servers - I am happy with that as I prefer Linux over Windows. The installation worked like a charm: the so-called "Proton" addin now runs on my V10 server in a VM - without SSL and authentication, which is fine for testing only.
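
For reference, Proton is configured through notes.ini variables. The names and values below are a rough sketch from memory rather than a copy of my setup, so double-check them against the AppDev Pack documentation:

  PROTON_LISTEN_PORT=3002          (the default port, as far as I recall)
  PROTON_LISTEN_ADDRESS=0.0.0.0
  PROTON_SSL=0                     (no TLS - testing only)
  PROTON_AUTHENTICATION=anonymous  (no client certificates - testing only)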

During the past few hours I was able to use a native Node.js app to create and read data from a database (the shipped demo and also my own NSF, which is my blog). I am not an experienced Node.js developer, but I managed to create a simple app that displays data in a list and shows a single document once you click an entry. I used DQL to fetch the data.
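
To give you an idea of what that looks like before you dig through the repo: the basic flow with the domino-db module from the AppDev Pack is roughly the following. This is a sketch from memory, not a copy from my code - the form and item names are placeholders, and the option names may differ slightly in your version of the module.

  // rough sketch - form name, item names and connection details are just examples
  const { useServer } = require('@domino/domino-db');

  async function listBlogEntries() {
    // connect to the Proton addin (no SSL here, testing only)
    const server = await useServer({
      hostName: 'your-domino-server',
      connection: { port: '3002' }
    });

    // open the NSF - in my case the blog database
    const database = await server.useDatabase({ filePath: 'blog.nsf' });

    // DQL query, reading only the items needed for the list view
    const result = await database.bulkReadDocuments({
      query: "Form = 'BlogEntry'",
      itemNames: ['Subject', 'Date']
    });

    return result.documents; // capped at 200 entries at the moment
  }

  listBlogEntries()
    .then(docs => docs.forEach(doc => console.log(JSON.stringify(doc))))
    .catch(console.error);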

There are some things I learned today:

  • promises and async are king
  • bulkReadDocuments only retrieves 200 entries per call
  • getting MIME content from a field is currently not documented and doesn't work as expected
  • EJS as a template engine is great and smoother than e.g. Jade (just my 2 cents) - see the small template sketch after this list
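
To illustrate the last point, a list template in EJS can look roughly like this - "entries", "unid" and "subject" are just placeholders for whatever you pass in from your route:

  <!-- sketch of an EJS list template, property names are examples -->
  <ul>
    <% entries.forEach(function(entry) { %>
      <li>
        <a href="/entry/<%= entry.unid %>"><%= entry.subject %></a>
      </li>
    <% }); %>
  </ul>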

I put up a repo so you can get an idea of how I worked it out. It's not the sexiest code of all time, but it works. Keep in mind that I use my blog NSF as the source, so your results may vary.

You can grab it here for some inspiration: https://gitlab.com/obusse/domino-node-list

Latest comments to this post

Karsten Lehmann wrote on 11.10.2018, 14:40

There's no dynamic sorting yet. This would have to be done in Node.js.

So you run a DQL query against an NSF, get back a result set of 20,000 documents that you fetch in 100 requests of 200 entries each between Node.js and Domino, store everything in the main memory of the Node.js process and sort it there. Next you compute which of those documents fit into the page requested by the browser, return them and throw away the remaining data.
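
In Node.js that workaround boils down to something like this (just a sketch, assuming you already have all fetched documents in one array and sort by an arbitrary item):

  // sketch: sort the complete in-memory result set, then cut out the requested page
  function pageOf(allDocuments, pageIndex, pageSize) {
    const sorted = [...allDocuments].sort((a, b) =>
      String(a.Subject).localeCompare(String(b.Subject))); // sort key is just an example
    const start = pageIndex * pageSize;
    return sorted.slice(start, start + pageSize); // the rest is thrown away
  }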

Raise your hand in the beta forum if you think there are better ways to do this :-).

Oliver Busse wrote on 11.10.2018, 12:08

Mark, you can page through the results. There are optional parameters for the bulkReadDocuments method called "start" and "end", but I haven't tried them yet.
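
Untested, but I would expect the call to look something like this (the meaning of "start" and "end" is my assumption):

  // untested sketch - "start" and "end" semantics are an assumption on my part
  function readSecondPage(database) {
    return database.bulkReadDocuments({
      query: "Form = 'BlogEntry'",
      start: 200, // skip the first 200 results (assumption)
      end: 400    // ...and stop after the next 200 (assumption)
    });
  }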

Mark Leusink wrote on 11.10.2018, 11:14

Thanks for this!

So 200 entries is the max no of results you can read? Or are you able to 'page' the results?

And can you pass a parameter so the results are sorted (according to a view column maybe)?

Sean Cull wrote on 11.10.2018, 09:08

Thanks Oliver. Input like this (or similar from IBM/HCL) will be so important for adoption.

