What I've Learned (2 of 4)

by Michael Clark
March 20, 2012 10:10 PM

A recent project of mine has been the Admissions Checklist - a site where prospective students can track what they've turned in, see deadlines, and get in touch with Admissions. This has been a fairly big project, and there have been several things I've had to figure out through trial and error.

One of the first things that stands out to me involves CSS positioning. In the Checklist, we've got three sliding "panels" which the user can alternate between for their profile, FAQs, and contact information.

This is accomplished using absolute and relative positioning. In the markup, I have a "container" div with its position set to relative and its overflow set to hidden. Setting the container's position to relative makes it the positioning context for the absolutely positioned panels inside it, and setting overflow to hidden hides any panel that's out of view. The container div also gets a positive z-index.

My panel divs are then given an absolute position. On load, I use JavaScript to display the "profile" div by setting its position (top, bottom, left, right) properties and giving it a z-index greater than the container's. The other two divs - "FAQs" and "Contact" - are hidden by pushing them right and off the screen and giving them a z-index less than the container's.

When the user clicks to move to a different panel, I use jQuery animate to switch the positions/z-indexes of my panel divs.
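The position bookkeeping behind that switch can be sketched as a small function. The panel names and the 600px width here are assumptions for illustration, not the actual Checklist values:

```javascript
// Sketch of the panel-switch bookkeeping. Panel names and the
// 600px width are assumed values, not the real Checklist ones.
var PANEL_WIDTH = 600;
var panels = ['profile', 'faqs', 'contact'];

// Returns the target CSS "left" value (in px) for each panel when
// `active` is the one being shown: the active panel sits at 0,
// everything else is pushed off-screen to the right.
function targetOffsets(active) {
  var offsets = {};
  panels.forEach(function (name) {
    offsets[name] = (name === active) ? 0 : PANEL_WIDTH;
  });
  return offsets;
}

// With jQuery loaded, the animation itself would look roughly like:
//   $('#' + name).animate({ left: offsets[name] + 'px' });
```

Because the inactive panels land outside the container and the container has overflow hidden, they simply slide out of sight rather than stacking up visibly.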

It might sound more complicated than it is, but in the end, it provides a nice little user experience.

What I've Learned (1 of 4)

by Michael Clark
February 15, 2012 10:54 AM

I've officially been back at Freed-Hardeman for a little over 6 months and in that time there have been a few things I've learned.

The first tidbit I'd like to share is in regards to creating an equal-height two-column layout using CSS only. There are many "tricks" to accomplish this - tables, JavaScript, etc. - but I've finally stumbled across a pure CSS method. It's detailed fully in this article on A List Apart, so I'll give a quick summary for the skimmers at home.

This method involves three divs - a container, a main content area, and a rail (think sidebar). The content and rail divs are nested inside the container div (it may seem weird, but the content div needs to come before the rail div regardless of which side the rail is supposed to display on). Here are the important pieces of CSS, with illustrative div names (you can add background colors to differentiate your columns if you'd like):

  #container {
    width: 750px;
  }

  #content {
    width: 600px;
    float: left;
    border-right: 150px solid #000;
    margin-right: -150px;
  }

  #rail {
    width: 150px;
    float: right;
  }

Let's break this down into what's going on.

We have a 750px wide container with a 600px content area and a 150px side rail. The content area also gets a 150px right border the same color as the side rail, plus a -150px right margin that allows the rail div to move into its proper place. Thanks to the border, as the content area grows, the sidebar appears to grow alongside it (when in reality, it's just the right border). In similar fashion, if the side rail is taller than the content area, the container's background is the same color as the content area and fills in what's missing.

Better yet, the layout can be modified to put the rail on the other side should you so desire. The article from A List Apart also goes into detail on how to create a three-column equal-height layout.

Pretty simple, isn't it?

Hello World in Node.js

by Michael Clark
November 18, 2011 10:59 AM

Node.js has been one of the latest buzzwords in the development community, but what exactly is it? Node's community wiki calls it a server-side JavaScript environment that uses an asynchronous event-driven model. I've been interested in learning more about JavaScript, and the server-side aspect was enough to pique my interest.

Since this is going to be an introduction to Node piece, we'll go through the traditional "Hello World" example. Here's a snippet directly from the Node website that creates a web server which responds to every request with "Hello World":

var http = require('http');
http.createServer(function (req, res) {
   res.writeHead(200, {'Content-Type': 'text/plain'});
   res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

Let's go line by line and explain what's going on with these 6 lines of code.

The first line, according to the Node API, is required in order to use the HTTP server and client. It essentially includes the http library similar to how you might include the System.IO namespace in C#.

The createServer function is from the http library; it takes a callback function and returns a new server object. The callback here is the anonymous function receiving the request and response objects, and it runs once per incoming request. On the returned server we then call listen, which takes two parameters, but only one is required. In our case we provide both - the first one is required and is the port to listen on, and the second is the hostname to bind to.

The meat of the createServer function has two calls - writeHead and end. The first call, writeHead, has a couple of arguments we can pass in. The first argument is the status code of the request (200 or OK in this snippet). The second is an object containing all the response headers we'd like to set (we're just setting the Content-Type header here). The end call allows us to pass in the string we'd like to print out (Hello World) and signals to the server that the response is ready.

The last line prints out a simple message to the console that lets us know the server is running and what port it's listening on.

Pretty straightforward, isn't it? If you're interested in seeing Node in action, check out the Node chat room. Both the client side and server side were written in JavaScript.

My last post was on crawling FHU.edu and what got me interested in Node was an article on scraping web pages with Node.js and jQuery. I may try that out sometime when I get the chance - especially since Node now runs in Windows natively as of November 11.

Crawling FHU.edu

by Michael Clark
October 4, 2011 8:07 PM

A project that has been on my list for a while now has been a broken link and spell check application that we can use to verify the content on the FHU website. As it currently stands, the application back-end is probably 75% complete.

The broken link portion successfully crawls the website (any website, for that matter) and returns the status code for the page and every link on it. This has enabled us to find instances where a typo was made when editing CMS pages, or where a link points to a page that no longer exists. The broken link portion currently only returns status codes of 200 (OK), 301 (moved permanently), 404 (not found), or 500 (internal server error). If the page returns anything but those, the application records a status code of 99, noting that further investigation is needed.
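The status-code filtering described above boils down to a small rule. The actual application is written in .NET, so this JavaScript version is only an illustration of the idea:

```javascript
// Status codes the crawler records as-is; anything else becomes 99.
// This mirrors the rule described in the post, not the real .NET code.
var TRACKED_CODES = [200, 301, 404, 500];

// Returns the code to record for a crawled link: known codes pass
// through untouched, unknown ones are flagged for manual follow-up.
function recordStatus(code) {
  return TRACKED_CODES.indexOf(code) !== -1 ? code : 99;
}
```

Collapsing the long tail of uncommon codes into a single "investigate" bucket keeps the report readable without silently discarding anything.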

The spellchecking portion is what I'm currently working on, and it has been a headache to say the least. I'm using a Hunspell dictionary with the NHunspell .NET wrapper. Hunspell is the same spell checker used by LibreOffice, Mozilla, Eclipse, Google Chrome and Mac OS X Snow Leopard. The biggest hurdle at this moment is parsing page content so that things like HTML tags are not checked. This has been problematic just because of the sheer number of cases to find and fix.

To parse HTML content, I've come across a free .NET library called HTML Agility Pack. The library uses XPath syntax and enables me to easily select different sections of the markup (nodes). From there I can remove the nodes, split them, or add to them.
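The core idea, stripping markup so only prose reaches the spell checker, can be sketched with a few regexes. This is a naive JavaScript illustration only; the real project uses HTML Agility Pack in .NET precisely because a regex approach like this misses many of the edge cases mentioned above:

```javascript
// Naive sketch of "strip markup before spellchecking". Real HTML
// needs a proper parser (the post uses HTML Agility Pack); this
// regex version is only to show the shape of the step.
function extractText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, ' ') // drop script bodies
    .replace(/<style[\s\S]*?<\/style>/gi, ' ')   // drop stylesheets
    .replace(/<[^>]+>/g, ' ')                    // drop remaining tags
    .replace(/\s+/g, ' ')                        // collapse whitespace
    .trim();
}
```

Script and style bodies have to go first, since their contents are code rather than prose and would otherwise flood the misspelling report.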

Once I finish the back-end portion, I'll begin working on a front-end that will grab data from the database and generate reports detailing the broken links and misspelled words on the website. The back-end will be a task application that runs periodically.

Only1 Version 4

by Michael Clark
September 1, 2011 3:37 PM

When I was tasked with revamping the current iteration of Only1, I was initially overwhelmed - especially given what was requested. The idea was to increase the security around our password management so that we no longer relied solely on the last four of SSN and DOB to verify a user. The solution was a set of three security questions set up by the user when they first log in to change their password. Sounds simple enough, right? Kind of.

The first question comes from a set of pre-defined questions. These were pretty generic and similar to the ones you might see on other sites (mother's maiden name, high school mascot, etc.).

The second question allows the user to type their own question and answer. This makes it unlikely that two users will have the same security question, and thus know one another's answers.

The third question, and one that I initially thought would be difficult to program, involves a set of images unique to the user. When a user logs in, their username is assigned a set of 10 images - these images will always remain the same for the user and will always appear in the same order. This type of user verification is similar to what you might see on a banking website. With this third question, we can be sure that no two users will have the same answers.
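One way to get a stable, per-user image set without storing an assignment table is to derive it from a hash of the username. To be clear, this is a hypothetical sketch: the hash, the pool size, and the selection scheme are my assumptions, not the actual Only1 implementation:

```javascript
// Hypothetical sketch of a stable per-username image assignment.
// Pool size, set size, and the hash are assumed values, not the
// real Only1 scheme.
var POOL_SIZE = 100; // total images available (assumed)
var SET_SIZE = 10;   // images shown to each user

// Simple deterministic string hash (djb2 variant).
function hash(str) {
  var h = 5381;
  for (var i = 0; i < str.length; i++) {
    h = ((h * 33) + str.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Same username always yields the same SET_SIZE image ids, in the
// same order, with no assignment stored anywhere.
function imageSet(username) {
  var ids = [];
  var h = hash(username);
  while (ids.length < SET_SIZE) {
    var id = h % POOL_SIZE;
    if (ids.indexOf(id) === -1) ids.push(id);
    h = hash(username + ids.length + h); // re-hash for the next pick
  }
  return ids;
}
```

Deriving the set from the username means the images come out identical on every login, which is exactly the "always the same, always in the same order" behavior described above.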

Once the account is set up, the user is given three options to proceed - the old username/SSN/DOB combo and two new methods. The first new method allows the user to enter their username and current password, which takes them directly to a change password page. The second allows the user to enter their username and then answer the three security questions previously set up as verification before proceeding to the change password page. The existing username/SSN/DOB method still exists but now also requires answering the verification questions.

Once logged in, the user can also manage their account. This is helpful in the event a user feels their verification questions have been compromised or forgotten.

Only1 version 4 has been live since Monday and seems to be running smoothly so far.