Auto-updating the IP address of a subdomain on Hostmonster

With my Hostmonster account, I host this website under my www.jasonernst.com domain, but I also have several other machines that are reachable through subdomains. For example, dev.jasonernst.com is my home machine and lab.jasonernst.com is my office machine at school.

However, as you can imagine, the IP of the home machine in particular is prone to change occasionally, since it is assigned by the ISP via DHCP. So I searched around for a script that could change the zone file in Hostmonster, since that is what controls the IP addresses of all my subdomains.

** Of course, the obvious and easy way to do this is with some kind of DynDNS account, but I’m picky and like to have everything working under my own domain 😛

I was looking for a way to do this over SSH, but there doesn’t seem to be an easy one. I did, however, find a Ruby script here: http://deathwarrior.wordpress.com/ which does what I want. Unfortunately the script was written a while ago, and parts of it need to be changed a bit to work.

The first change is to the required libraries. The ruby-mechanize package no longer seems to be available in the Ubuntu 12.04 repos, but this should do the trick:

sudo apt-get install libwww-mechanize-ruby ruby-json ruby

The other problems are in the Ruby script itself. It seems verify_mode does not exist in Mechanize anymore, so just comment that line out. The user agent alias in the original script is also not recognized by the version shipped with Ubuntu; changing it to “Windows IE 7” fixes that. Here is the code with the changes:


#!/usr/bin/env ruby

require 'mechanize'
require 'json'
require 'net/http'

USERNAME  = 'MY_USERNAME'
PASSWORD  = 'MY_PASSWD'
DOMAIN    = 'positrones.net'
SUBDOMAIN = 'my_subdomain'
ADDRESS   = '0.0.0.0' # If nil, try to detect it automagically

URLS = {
  'login'    => 'https://login.hostmonster.com/?',
  'drecords' => 'https://my.hostmonster.com/cgi/dm/zoneedit/ajax'
}

USER_AGENT = 'Windows IE 7'

m = Mechanize.new do |a|
  # verify_mode no longer exists in newer versions of Mechanize
  #a.verify_mode = OpenSSL::SSL::VERIFY_NONE
  a.user_agent_alias = USER_AGENT
end

# Fetch this machine's public IP from a page that returns only the address
def get_ip
  r = Net::HTTP.get('jasonernst.com', '/ip.php')
  ip = r.match(/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/)
  ip[0].to_s
end

# Do the login stuff
print "Checking user and password... "
page = m.get(URLS['login'])
form = page.form_with(:name => 'theform')
form['login']    = USERNAME
form['password'] = PASSWORD
send_button = form.button_with(:value => 'Login')
form.click_button(send_button)
page = page.links[0].click
puts "done!"

# Edit the DNS zone
page = page.link_with(:text => 'DNS Zone Editor').click
print "Getting old zone records... "
json = JSON.parse(m.post(URLS['drecords'], {'op' => 'getzonerecords', 'domain' => DOMAIN}).body)
puts "done!"

print "Trying to get subdomain old info... "
json['data'].each do |r|
  if r['name'] == SUBDOMAIN
    print "done!\nSaving new address [#{r['address']} => #{ADDRESS || get_ip}]... "
    json = m.post(URLS['drecords'], {'op'           => 'editzonerecord',
                                     'domain'       => DOMAIN,
                                     'name'         => SUBDOMAIN,
                                     'orig__name'   => SUBDOMAIN,
                                     'address'      => ADDRESS || get_ip,
                                     'orig__adress' => r['address'],
                                     'ttl'          => r['ttl'],
                                     'orig__ttl'    => r['ttl'],
                                     'Line'         => r['Line'],
                                     'type'         => r['type']})
    json = JSON.parse(json.body)

    if json['result'] == 1
      puts "done!\nAddress changed successfully!"
      Kernel::exit(0)
    else
      puts "\nAn error has occurred trying to save the new address :("
      Kernel::exit(1)
    end
  end
end

puts "The subdomain #{SUBDOMAIN} cannot be found!"
Kernel::exit(1)

The last thing needed is a server page somewhere for this script to find out its own public IP address. All the page should return is the IP address itself, not any HTML. You can leave the default in the script, which uses my IP script, or you can make your own. In PHP it’s as simple as this:

<?php echo $_SERVER['REMOTE_ADDR']; ?>

And one more thing, if you are so inclined, is to add the script as a cron job. I have mine set to run every hour, but you can run it more or less often according to your preferences.

crontab -e

I put my script in a /scripts/ folder I made and named it “hostmonster-auto-ip.sh”, so the crontab entry is:

@hourly /scripts/hostmonster-auto-ip.sh
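Since the updater itself is a Ruby script, a thin shell wrapper works well as the cron target. This is just a sketch – the /scripts/hostmonster-auto-ip.rb path and the log location are assumptions, adjust to taste:

```shell
#!/bin/sh
# Hypothetical wrapper around the Ruby updater above; assumes the Ruby
# script was saved as /scripts/hostmonster-auto-ip.rb.
ruby /scripts/hostmonster-auto-ip.rb >> /tmp/hostmonster-auto-ip.log 2>&1
```

Don’t forget to make the wrapper executable with chmod +x /scripts/hostmonster-auto-ip.sh.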

Blog stats

This blog has been running for quite a few years now, and an earlier post today got me thinking about its traffic patterns, so I thought I’d put some of the info together into a post 🙂

Here are the visits in the last three years (74,000 hits)

And in just the last year (30,000 hits)

The average time on the site remained about the same when looking back one or three years and in both cases is about two minutes.

Here is a world map showing the countries where most of the traffic comes from. The top three were US, India and Canada. The darkest colour represents almost 6000 hits.

The most popular browser among visitors to my site is Firefox, followed by Chrome, IE and Safari. I was a bit surprised that Safari did worse than IE, but I guess it can be expected when you see the next figure with OSes.

The most popular OSes for my site: Windows, Linux, Mac. It makes me a little sad that Linux wasn’t the highest proportion, given the content of my site… but it was at least respectable.

Lastly, here is the posting frequency. There have been 83 posts total according to the WordPress admin page.

Including Blogs in Tenure & Promotion

Inspired by this blog post: http://andreweckford.blogspot.ca/2012/10/would-you-include-your-blog-in-your-t.html and the ensuing Twitter conversation: https://twitter.com/andreweckford/status/260737094056542210 I decided to write a more in-depth response on my blog.

Compared to the rest of the group in the Twitter conversation, I am probably the most junior, being still a grad student (the others are librarians and professors at various stages). The general consensus seems to be that while a forward-facing department may consider a blog in its criteria, there are no clear-cut rules to support blogs in the process, and most people would not include one.

When I get to that stage in my career, however, I’ll try my best to shoehorn it in somehow. I’m of the opinion that any modern academic department should be considering blogs and even social media accounts in the T&P process. These types of things show how engaged an academic is with the public. We need more engaged professors and research students. The general public pays for our work (for the most part) and we should make them aware of what they are paying for.

As mentioned in the Twitter conversation, the benefits of blogging and social media in terms of connections and collaborations with other researchers end up showing on the resume anyway, through publications and projects. However, I think there is also value in the quality of a blog itself.

As for the less professional posts, I think these have some value too. As long as they aren’t the type someone should be embarrassed about, they humanize the professor for students and the general public and make the professor more approachable. Sometimes it’s also nice to read these people’s opinions on issues outside their expertise.

My blog has been going for about three or four years now, and while I think the quality of the posts is not as good as the others’ in the Twitter conversation, it has some value since it has documented some of the journey through grad school, some of the work I’ve done and some of my thoughts that would otherwise just disappear into the ether. Clearly this is something people are interested in, since I regularly get around 2500 hits per month on the site. These types of statistics may be the kind of thing to include in T&P files. The number of hits, followers on Twitter, retweets, “klout” – all show the level of engagement of the professor. On the other hand, this could just lead to a race to the bottom of constant meaningless social engagement, but I’d still prefer a prof who tweets too much to one who is alone in their bubble.

Recent Computer Related Events

The last few months have been crazy.

I’ve had a couple of job interviews with tech companies in California, attended a BB10Jam, released a couple of Playbook apps, and started on some side research projects at school. So this post talks a bit about each of those things.

Job Interviews in Grad School
If you are like me and don’t like to say no to anything, you’ll probably go along with the whole interview process – just to see where it takes you. Also, if you are like me and a busy grad student, you probably will not have time to prepare properly for the interviews. This is probably a big mistake (or maybe not). From my perspective, the technical questions in the interviews went pretty well when they were directly in my areas of expertise. As soon as they meandered into peripheral areas, things got a little ugly. Worse still were the coding questions. While I do try to keep up my skills in this area, grad students (particularly PhD students) in comp sci aren’t always well known for being code monkeys. We spend most of our time reading papers and proposing algorithms and mechanisms for solving problems. We then either 1) hire an undergraduate student for the programming or 2) make use of as many open source tools as possible and “macgyver” them into suiting our needs. In the extreme case we may code some specialized tool for what we need as well. So as you can guess, my coding skills were rusty. If I were to prepare in the future, I’d focus on this for as much time as I could before starting the interview process. But since these companies all contacted me first, that wasn’t an option.

BB10Jam
Despite all the doom and gloom in Waterloo these days about RIM, I still like to develop apps for the Playbook for a few reasons. 1) RIM lives where I live. 2) I got a free Playbook at a conference. 3) The AppWorld is not saturated with a million of the same type of app yet.


So after developing a couple of apps I noticed they were having a BB10Jam here in Waterloo and went. The event was really good. The people were friendly and nice, and it was much less stuffy than some of the previous RIM events I have attended. Maybe it is a sign things are starting to reverse. Beyond just that though, the platform looks pretty good. I also got a developer device and am working on some apps for it now. The biggest selling point for developing for BB10 right now is the $10,000 reward for any app that can make $1,000 in the first year. Also at the event there were many stats showing increasing numbers of developers and apps in the store, which all sounds good. Another interesting bit of info was that BB users are more likely to pay for their apps compared to other platforms. All of this makes it increasingly attractive to develop for the platform. So at the end of the BB10Jam I came away pretty excited to get at it. Just need to find the time now!

BB Playbook Apps
If you have read this blog previously, you may have noticed something about one of my apps. The first one I made was a telnet application. I had intended to also include SSH, but have not had the time to add that bit of code yet. The biggest challenge with that app was developing a console library for displaying the text. At the time Cascades was not available yet, and this type of thing was a bit tricky. There’s more info on it here: http://www.jasonernst.com/2012/01/24/opengl-console-library-for-blackberry-native-sdk-playbook/. The app is available for free on App World: http://appworld.blackberry.com/webstore/content/77778/?lang=en

The second app I created is a web server. The inspiration was that I wanted to serve images from the camera onto a website hosted on the Playbook, so you can leave your Playbook set up as a sort of security camera. The first step was creating the web server to serve the content. Right now it just supports serving static content – no PHP, Perl, Python etc., but maybe in the future. The next step will be to hook into the camera and serve up a continuously changing image that is polled from the camera regularly. The app is available here: http://appworld.blackberry.com/webstore/content/124979/?lang=en
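The actual app is written against the Playbook Native SDK, but the core static-serving idea fits in a few lines of plain Ruby. This is just a sketch of the concept, not the app’s real code – the method name and layout are my own:

```ruby
require 'socket'

# Rough sketch of a static-file web server: answer each HTTP GET with
# the requested file from a document root, or a 404 if it doesn't exist.
# A real server would also need to sanitize the path to prevent
# directory traversal, handle MIME types, concurrency, and so on.
def serve_static(root, port)
  server = TCPServer.new('127.0.0.1', port)
  loop do
    client = server.accept
    request_line = client.gets.to_s
    # consume and ignore the remaining request headers
    until (line = client.gets).nil? || line.strip.empty?; end
    path = request_line.split(' ')[1] || '/'
    path = '/index.html' if path == '/'
    file = File.join(root, path)
    if File.file?(file)
      body = File.binread(file)
      client.print "HTTP/1.0 200 OK\r\nContent-Length: #{body.bytesize}\r\n\r\n"
      client.print body
    else
      client.print "HTTP/1.0 404 Not Found\r\nContent-Length: 0\r\n\r\n"
    end
    client.close
  end
end
```

Serving a “continuously changing” camera image then just amounts to overwriting one file in the document root and having the page poll it.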

Side Research Projects
The last thing is the side research projects I have taken on in the past couple of months. I have been collaborating with researchers in the math department on two projects. First is a cognitive agent project. In this project we are interested in the intelligence required for a creature to cross a busy street. The first creatures are oblivious to the conditions in which danger may exist so they cross the road. As some of them are hit, the creatures which follow can observe the conditions and avoid crossing. Eventually they start to learn when it is safe to cross. The focus of the work is the underlying learning mechanisms for this problem.

The second project I am involved in is a vehicular traffic simulation tool. An existing tool was developed several years ago; however, there were some shortcomings in the model. It is based on the Nagel-Schreckenberg traffic model, but the ramps on the highway were only one cell long. Because of this limitation, it is difficult to properly simulate certain highway configurations such as cloverleaf ramps. My recent work on this project has extended the model to address this limitation.
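The ramp extension itself isn’t shown here, but the core Nagel-Schreckenberg update is simple enough to sketch. Below is a minimal single-lane version in Ruby (the method name and data representation are my own, not taken from the project’s tool): each car accelerates toward the speed limit, brakes to avoid the car ahead, randomly slows down with some probability, and then all cars move in parallel.

```ruby
# One Nagel-Schreckenberg step on a single-lane circular road.
# cells is an array of length L where each entry is a car's current
# velocity (an integer) or nil for an empty cell.
def nasch_step(cells, vmax, p_slow, rng = Random.new)
  length = cells.size
  new_cells = Array.new(length)
  (0...length).select { |i| cells[i] }.each do |i|
    v = cells[i]
    # gap = number of empty cells between this car and the next one ahead
    gap = 1
    gap += 1 while gap < length && cells[(i + gap) % length].nil?
    gap -= 1
    v = [v + 1, vmax].min                     # 1. accelerate toward vmax
    v = [v, gap].min                          # 2. brake to avoid the car ahead
    v = [v - 1, 0].max if rng.rand < p_slow   # 3. random slowdown
    new_cells[(i + v) % length] = v           # 4. move (parallel update)
  end
  new_cells
end
```

With p_slow = 0 the model is deterministic; the random slowdown term is what produces the spontaneous “phantom” traffic jams the model is known for.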

IEEE ICC 2012 – Ottawa

Last week I presented a paper at IEEE ICC, and since it’s been a while since I have posted, I thought I’d put up a bit of a summary of my work. If you have looked around my site a bit, you might know my research is on heterogeneous wireless networking (making wireless technologies work together seamlessly). The goal of my work is to enable devices with Bluetooth, Wi-Fi, 3G, 4G, LTE and future radios to switch easily between connections or use more than one at a time. There are still many problems which make this impossible right now. For instance, if you are using Skype over a Wi-Fi connection on your phone and you leave a building and switch to the 4G network outside, chances are you will be disconnected from Skype (seamless handover is not supported).

Another problem may be a lack of co-ordination between radio access technologies (RATs). For instance, while Bluetooth supports adaptive frequency hopping (AFH) to try to avoid the same channels Wi-Fi is being used on, this may not be enough to ensure interference between the technologies is avoided. What happens when you are in a dense area where Wi-Fi is being used across all channels, or there are many devices? (Apartment buildings, dense cities etc.)

As you may know, a lot of wireless research is done using simulation because in many cases it is faster and cheaper. Many simulation tools support interference within a particular technology (i.e. Wi-Fi nodes interfering with other Wi-Fi nodes), but not many support interference between technologies (i.e. Wi-Fi nodes interfering with Bluetooth nodes). The paper I presented at ICC tries to understand how much of an impact this makes.

I’ll spare most of the details since they are in the paper, but essentially the finding is that Wi-Fi -> Bluetooth interference causes a drop of around 10-15 kbps of the total 50 kbps Bluetooth was able to achieve in our lab setup (about a 30% drop).

In the other direction, the interference was mostly insignificant. However, this was expected, since the setup used close ranges (Wi-Fi transmit power is much greater than Bluetooth’s).

The future work includes looking at varying distances (It seems like it will be interesting when we use a range that makes the Wi-Fi and Bluetooth powers similar). Eventually the goal is to create an interference model that can be used in simulation. If you want more details – see the attached paper and slides.

icc2012 (slides pptx)
icc20122 (paper pdf)