Wednesday, December 14, 2011

Even more WattDepot

After my previous post, where I reviewed another team's project, I found myself working on actually enhancing the system.  The additional commands to be implemented were:
 
1. set-baseline: gathers energy consumed in one-hour intervals over a 24-hour period to use as a baseline for later comparison
2. monitor-power: continuously prints a timestamp and the current power at user-specified intervals
3. monitor-goal: monitors the power and compares it to the baseline set by the first additional command.  The user picks a percentage reduction, which monitor-goal uses when comparing the current power to the baseline to see whether the reduction is sufficient.
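The heart of monitor-goal is a simple threshold comparison.  Here's a rough sketch of that calculation in Java (the class and method names are my own illustration, not the project's actual code):

```java
/**
 * Sketch of the monitor-goal comparison described above: given a baseline
 * power value, a current reading, and the user's desired percentage
 * reduction, decide whether the goal is being met.
 * All names here are illustrative, not the project's actual API.
 */
public class GoalCheck {
  /**
   * Returns true if currentPower is at or below the baseline reduced by
   * reductionPercent (e.g. baseline 1000 W with a 10% goal gives a
   * threshold of 900 W).
   */
  public static boolean goalMet(double baselinePower, double currentPower,
                                double reductionPercent) {
    double threshold = baselinePower * (1.0 - reductionPercent / 100.0);
    return currentPower <= threshold;
  }
}
```

The monitoring loop would then just fetch the current power every interval, run this check, and print whether the tower is meeting its goal.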
 
We were able to finish and implement all of the new commands.  As of this point, I believe there are still a couple of bugs in the system, but they are minor and can be fixed without much trouble (which is what they always say, right?).  Overall, each command works and the system compiles.  Looking at our enhancements with regard to the Three Prime Directives (you remember them from the previous post, right?), it is my humble opinion that they still fulfill them.  Our enhancements have an obvious benefit for the user, especially monitoring by goal.  We continued to add comments and test cases for each new enhancement, which means development can continue smoothly.  Issues were created and resolved, letting everyone know whom to contact for each part of the system.  The wiki pages were updated to reflect the enhancements.
 
One thing I want to mention briefly is how surprisingly difficult it can be to work with someone else's code.  In their design, an instance of each command is created every single time you type something into the command prompt.  Also, each command is supposed to return a boolean value indicating whether or not it ran successfully.  This was fine for the previous commands, but once we had to add set-baseline, which stores data somewhere, things had to change.  At first I wanted to just use a normal hashtable to store the data by tower.  But I quickly realized that under the current design I couldn't return the hashtable, because the interface expected a boolean.  And because each command is remade every time, I couldn't store the hashtable inside the command either.  In the end, I had to go a step above the commands and store the hashtable there.  I wanted to just pass the hashtable as a parameter to the command function, but then any command could access the inner contents of the hashtable and mess with the data.  While that might have been fine, since I wanted commands to change the hashtable anyway, it's bad practice to allow such things, and I had to change my own design.  It turned out to be a simple matter of making an encapsulating class for the hashtable, with get/put methods, and passing that in so commands can work with the hashtable without actually having it.
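The encapsulating class I ended up with looked roughly like this (class and method names here are hypothetical, for illustration only):

```java
import java.util.Hashtable;

/**
 * Sketch of the wrapper class described above: it owns the baseline
 * Hashtable privately and exposes only put/get, so commands can record
 * and read baseline values without ever holding the table itself.
 * Names are illustrative, not the actual project's.
 */
public class BaselineStore {
  // Maps a tower name to its 24 hourly baseline values (watt-hours).
  private final Hashtable<String, double[]> baselines =
      new Hashtable<String, double[]>();

  /** Records the 24 hourly baseline values for a tower. */
  public void put(String tower, double[] hourlyEnergy) {
    // Defensive copy, so the caller can't mutate our stored array later.
    baselines.put(tower, hourlyEnergy.clone());
  }

  /** Returns the baseline value for a tower at a given hour (0-23). */
  public double get(String tower, int hour) {
    double[] hours = baselines.get(tower);
    return (hours == null) ? 0.0 : hours[hour];
  }
}
```

The layer above the commands holds the single BaselineStore instance and passes it to each command, so even though the commands themselves are recreated on every input, the baseline data survives between them.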
 
While this seems like a simple problem at first, because I was working with someone else's code I was not immediately aware of the specifics of the system.  It therefore took me a while to figure out what was wrong and why.  Once I did, it seemed very obvious, but it still took longer than it would have had I been working with my own design the entire time.  Learning to do these kinds of things feels very important, though, because as systems get bigger and bigger, it becomes impractical for a single person to work on everything.  So it's good practice for me to work with someone else's system and learn to think the way its authors do.
 

Code Review

After having a chance to work on my own team's Hale-Aloha-Cli, I now have a better understanding of what it's supposed to do and the methodology involved in doing the project.  As an exercise in looking at other people's code, I did a technical review of another team's implementation of the same project, which uses the same sort of techniques.  To be more specific, they use continuous integration and manage their work through issues.

I touched on these topics briefly in the previous blog, but for those too lazy to read it again, or reading for the first time, here's another description.  If you already know this, skip down to yet another overview of the project and then the actual review.  Continuous integration means that everyone's changes are merged into the shared source all the time.  This seems pretty self-explanatory, but it comes with a couple of major problems and corresponding solutions.  First, because multiple developers are constantly working on the same source, if two people download it, work on it individually, and then try to upload their changes, each person's additions may conflict due to the concurrent development.  So we have a system that checks additions, tells us when there's a difference we need to look at before uploading, and tells us when our local version isn't the most up to date.  Hence, continuous integration.  To make something like this work, everyone has to upload fairly often, because the bigger a change is, the harder it is to make sure everything still works with it.

To go with that, we used issue-based project management and Google project hosting.  We split the project into small pieces that could each be done in a day, to keep things running smoothly.  Ideally, we'd have enough issues for everyone to be working on something at all times without getting bottlenecked.

So! On to the review.  Almost.  WattDepot, as you hopefully know, is a project for gathering, storing, and manipulating data from energy sources.  It's currently being used in some of the dormitories at the University of Hawaii to keep track of how well they are conserving energy.  Students are shown graphs and figures and given incentives to save energy, and so far the project seems to have been a success.  The project that we and the other teams were working on is a command line interface for WattDepot.  Each implementation should feature a command line and a handful of useful commands for accessing data gathered by WattDepot.  They are:

help: prints a helpful message explaining usage
current-power: returns the current power for a tower
daily-energy: returns the energy consumed on a given day
energy-since: returns the energy consumed from a given date to the most recent data
rank-towers: orders the towers by energy consumed
quit: exits the system

To evaluate any program, I mainly look at what are considered the three prime directives of any piece of software.

Prime Directive #1: The system successfully accomplishes a useful task.
Prime Directive #2: An external user can successfully install and use the system.
Prime Directive #3: An external developer can successfully understand and enhance the system.

Here is the system I am reviewing.
For the first directive, I downloaded the latest distribution here.  Everything compiled correctly and I was able to run the program.  I tried each of the commands and, for the most part, they all worked.  The rank-towers command sometimes didn't behave 100% properly, but as a whole the project does a good job of accomplishing something useful.
For the second directive, I at first felt I could immediately say they fulfilled it, because I was able to compile and run everything.  But I'm very familiar with how the system works and what it's supposed to do.  For an average user this may not be the case, so I looked at their project page to see if there was at least an overview and instructions on usage.  Both were there, as was a jar file, so users can run the CLI even without a JDK.

Lastly, I checked the documentation and structure of the project to determine whether an external developer can successfully understand and enhance the system.  On the project page there is a developer's guide that gives clear instructions and guidelines on how to work with the system.  They also included javadocs, which I was able to easily compile and build, and which gave further information on each part of the system and all pertinent methods.  Finally, the code itself has a lot of comments, which made it easy to see the logic behind the development.  The structure is modular, so I could create a new command without breaking anything else, which makes further development easier.  Each class has an associated test class to ensure that further development won't break something (or if it does, you will immediately be aware of it because the test will fail).  Some parts did not have 100% coverage, but that's mostly due to catch blocks that are never exercised, because the code either always works correctly or handles the error before reaching the catch.
In addition to the good documentation and instructions on how to develop the code, it was clear which developer was responsible for which part of the project, thanks to the issue-based system Google hosting provides.  By looking at which issues were completed by whom, I could easily tell whom to ask for help if I didn't understand a certain part.  It's interesting to note that of the three developers, two did the brunt of the work while the third kind of slid along.  While this isn't really relevant to the directive, it makes me wonder whether he was in charge of divvying out tasks or some other effort not immediately obvious, or whether he was just lazy.  Overall, though, the system can clearly be enhanced easily.

And with that, the program fulfills all three prime directives.