The Importance of SOAP

Like any new blog that gets stood up, this one fell off the radar because there were more important things to focus on:

Well, the list goes on. Anyway, since my last post I still haven't actually finished the learnyounode course from last year, but it gave me enough of a primer to be aware of the asynchronous nature of Node.js. I do plan to finish that course along with a handful of Codecademy courses, but again, it will go on the list of things to do.

Edit

April 13th, 2018 at 1:46 PM

I was only a quarter of the way through this post before turning off my DigitalOcean droplet for good. I wasn't actively using the server for anything and figured I could save myself the hundred dollars and budget it elsewhere. The title of the post was going to be a play on detergent and my newly acquired knowledge of the Simple Object Access Protocol (SOAP).

In school we never had to touch SOAP, but we did get to learn how it was used in television broadcasting systems, in particular at ABC. SOAP works well when the systems using it conform to their Web Services Description Language (WSDL) definitions. In my line of work, I am actually the middle man, manipulating data as needed and relying on Document Object Model (DOM) traversal using ECMAScript for XML (E4X). E4X usage and support seem to be dying out, as plenty of browsers have dropped support for it, and most recent systems lean towards JavaScript Object Notation (JSON). JSON is a lot more manageable and easier to parse with human eyes, but I feel like it lacks the built-in facilities XML provides for verifying the structural integrity of a payload. Things like JSON Schema have popped up, mirroring what WSDLs provided for XML objects, but as with most things, we tend to bash the old methodology only to recreate it.

Standards

Jersey Barrier

Up to this point I have managed to complete another two exercises from nodeschool.io's learnyounode package, and have five more exercises left to complete.

I started working on the HTTP Collect exercise:

HTTP COLLECT

Exercise 8 of 13

Write a program that performs an HTTP GET request to a URL provided to you as the first command-line argument. Collect all data from the server (not just the first "data" event) and then write two lines to the console (stdout).

The first line you write should just be an integer representing the number of characters received from the server and the second line should contain the complete String of characters sent by the server.

and right away I hit a roadblock, well, a jersey barrier, figuratively of course. I read the hints too procedurally (thanks, computer science classes) and thought, "Wait! You are saying I have two ways to do this but I have to do it your way?!"

Here is the beginning of the hints section:

HINTS

There are two approaches you can take to this problem:

1) Collect data across multiple "data" events and append the results together prior to printing the output. Use the "end" event to determine when the stream is finished and you can write the output.

2) Use a third-party package to abstract the difficulties involved in collecting an entire stream of data. Two different packages provide a useful API for solving this problem (there are likely more!): bl (Buffer List) and concat-stream; take your pick!

At first I thought, "Okay, I'll take the high road and do it without the plugin". I started by attempting a solution based on the previous exercise, HTTP Client, and took my first go at verifying it with learnyounode verify httpConnect.js. Right off the bat, I got their lovely "didn't pass" error. I sat on it for a bit and re-read the instructions.

When I went over the hints again, I had an aha moment: the choice they were giving me was between the two packages they recommended for reading streams, bl and concat-stream. So I caved, went with bl, and solved the problem, after having to figure out why I couldn't include the bl package in the first place. It turned out I had installed the package using npm install -g bl instead of npm install bl.

Here's my solution using bl:

var http = require("http");
var bl = require("bl");

http.get(process.argv[2], function(response) {
	response.pipe(bl(function(err, data) {
		console.log(data.toString().length);
		console.log(data.toString());
	}));
});

It still irked me that I had even bothered downloading another package for something that seemed so simple. So I went back and realized that my first go without the package could have worked, but I was so concerned with following how they wanted me to work it out that I missed just printing out data.length before printing the data! If I had re-read the instructions and taken a breather, the problem would have been completed much sooner.

Here's my solution without bl:

var http = require("http");

http.get(process.argv[2], function(response) {
	var output = "";
	response.setEncoding("utf8");
	response.on("data", function(data) {
		output += data;
	});

	response.on("end", function() {
		console.log(output.length);
		console.log(output);
	});
});

Overall it's only six lines longer, and I am not pulling in a whole package just to do one thing. You could argue that there are benefits, but, as with the tried and tested jQuery, if you can write it yourself without having to reinvent the whole wheel, you are better off using vanilla JS.

EDIT

From: maxwell ogden @denormalize

theleovander one upside to using concat-stream/bl is that you don't corrupt multibyte utf8 characters (which happens with output += data)

After looking at the bl repository, it looks like the incoming chunks are pushed into an Array and then Buffer is used to concat the contents. With that being said, my += solution could potentially be replaced with an Array.push() and an Array.join(), but there could be more I am missing with the conversions.
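The corruption @denormalize is describing comes from decoding each chunk to a string on its own: a multibyte UTF-8 character can be split across two "data" events, and each half decodes to a replacement character. Collecting raw Buffers and concatenating once, the way bl does internally, avoids it. A small sketch of the difference:

```javascript
// "é" is two bytes in UTF-8 (0xC3 0xA9); split the payload mid-character
// to simulate a multibyte sequence straddling two "data" events.
var full = Buffer.from("café");
var chunks = [full.slice(0, 4), full.slice(4)];

// Decoding each chunk on its own corrupts the split character:
var perChunk = chunks.map(function (c) { return c.toString(); }).join("");

// Collecting Buffers and decoding once keeps it intact:
var whole = Buffer.concat(chunks).toString();

console.log(perChunk); // "caf" followed by replacement characters
console.log(whole);    // "café"
```

Worth noting: calling response.setEncoding("utf8"), as my vanilla solution above does, also handles split characters correctly, since Node buffers incomplete byte sequences internally before emitting strings; the corruption bites when raw Buffer chunks get converted with toString() one at a time.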

Progressing?

In a week I feel like I haven't gotten much done. I have doubled the number of positions I need to apply to, but keep putting off actually applying.

All these 'Software Engineer' and 'Developer' positions look the same, with only a few words changing between position requirements. Someone should get on consolidating all those job postings and come up with what an 'Associate' is, and what duties should generally be accepted for that role.

Anyway, since last time I managed to finish one more lesson from nodeschool.io's learnyounode, got my free year of Unreal Engine 4 thanks to the GitHub Student Pack (if you are still in school you should definitely sign up; I am actually hosting this blog using the free $100 credit for DigitalOcean), and even managed to start riding my bike again.

I am trying to keep my head up, as well as play some games on the side; League of Legends (don't worry I'm not even ranked) and Borderlands 2.

Going to try my best at:

  • Completing those node tutorials
  • Going on another bike ride
  • Filling out some applications
  • And learning the ever-magical Python

Also, check out nnkd's awesome new Dex UI via reddit.

In Between

It has now been almost a month since my last final of my undergraduate career and it almost seems hopeless. I had a shot at a field I was semi-interested in and got my hopes up. That was followed by headaches and promises that ended up falling through. I have been lucky enough to at least get my foot in the door of these companies, but it doesn't seem to be working out.

What I can tell you is that no matter how prepared you think you are (I have been submitting applications since August 2014), only time will tell when you will land that sweet job.

At bat: