Microsoft BPOS vs Google Apps – My Move to “The Cloud”

You have probably heard the term “The Cloud” and that it is the way of the future for IT, but what can it do, how does it work and why should you consider it? Google and Microsoft are two companies investing heavily in web-based services for business. I have tried both in a couple of different ways and discovered that you get what you pay for.

Many of the small businesses I work with have grown from nothing, with minimal I.T. knowledge, until they reach a point in their business life cycle where things start to get harder. Changes to systems become painful to implement, things no longer work properly and they blame their I.T. While it can be argued that I.T. is in fact the problem, it is more often poor implementation of I.T. rather than the technology itself. When this happens, it is usually time for some centralisation of services and files. Enter the cloud.

Historically, the usual step at this point was to hire an I.T. person, spend a few thousand dollars to put in a server, upgrade the network, and start to think about how it is all supposed to work and make it happen (central anti-virus, central shared storage, network backups, perhaps an internal email server, domain controller, automated policies etc). This is still quite common, and I still do these types of rollouts myself, but is it really necessary? A few years ago, yes it was, but now there are alternatives in cloud computing (such as offerings from Google Apps, Microsoft BPOS, HyperOffice, Salesforce and many more). Basically, the business decides what it needs from a storage, communication and collaboration perspective and simply subscribes to these services online (in “The Cloud”).

There are some downsides to working in the cloud: you need a reasonable internet connection, your data access will be slower than a local server, some functionality may be limited, and security and privacy are not totally in your control. There are also many upsides to operating this way: you don't need to finance a server (monthly fees are often far easier to fund), you can quickly and easily scale the services with your business growth, your data is managed and backed up for you, and you can access all your services from anywhere, on any computer with an internet connection.

My own use of cloud computing for business began with Google Apps, the free version, and only with email. Using Google Apps I was able to keep my desktop, laptop and mobile phone email and calendar synchronised at all times, something that is only possible with some central control (e.g. a server). I then began to use Google Docs for file storage. The free version of Google Apps is very good for a free system, but moving up to the Premier edition gives more storage space, no ads and access to the Google Apps Sync tool for Outlook. This works pretty well and I was happy, until I began my first client implementation…

The problems with Google Apps began, in part, with the slow upload speed of the office internet connection. Pushing gigabytes of email into the cloud took a considerable time, during which a significant amount of email was simply unavailable. It took nearly two weeks before email sync stabilised.
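To get a feel for the scale of the problem, a rough upload-time estimate can be sketched like this (a simplification that ignores protocol overhead, throttling and retries, so treat the result as a lower bound):

```python
def upload_hours(gigabytes, megabits_per_second):
    """Rough time to push a mailbox into the cloud over a given uplink.

    Ignores protocol overhead and provider throttling, so the real
    migration will take longer than this.
    """
    bits = gigabytes * 8 * 1000**3              # decimal GB to bits
    seconds = bits / (megabits_per_second * 1000**2)
    return seconds / 3600

# e.g. 10GB of mail over a 1Mbit/s ADSL uplink is roughly a full day
print(round(upload_hours(10, 1), 1))
```

Multiply that by several mailboxes, plus retries when the sync stalls, and the two-week stabilisation period is not so surprising.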

Problems then followed with synchronisation between Google Apps and mobile phones, in this case iPhones. Email worked fine, but there were many issues with contacts: they would fail to sync, they would often delete off the phones and then re-sync, contacts were not replicating back from the phone to Google Apps and then to the desktop (contacts added on the phone would be deleted on the next sync), and there were a few other quirks. The contacts sync was only solved by manually exporting all contacts from all locations to a local CSV file, manually editing it to make the formatting consistent, deleting all contacts from Google Apps directly, waiting until the sync deleted them from the phone and desktop, then importing directly into Google Apps from the CSV. Once this was done, contacts began to work reliably.
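The manual CSV clean-up step can be partly scripted. This is a minimal sketch; the “E-mail Address” column name is an assumption based on Google's CSV export format, so adjust it for your own export:

```python
import csv
import io

def normalise_contacts(csv_text):
    """Strip stray whitespace and de-duplicate contacts by email address.

    Keeps the first occurrence of each address. The "E-mail Address"
    column name is an assumed Google-export field, not universal.
    """
    seen = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        cleaned = {k: (v or "").strip() for k, v in row.items()}
        email = cleaned.get("E-mail Address", "").lower()
        if email:
            seen.setdefault(email, cleaned)  # first occurrence wins
    return list(seen.values())
```

Export from each location, concatenate the rows, run them through a pass like this, then import the single cleaned file back into Google Apps.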

The next issue was the email limitations, mainly the 10MB message size limit. Another client had problems with the number of recipients allowed per email as well. The final straw, though, was when a key email account was shut down for 24 hours without warning “due to suspicious activity”. There is then no-one to call and no way to speed up getting the account unlocked. There is supposed to be an email address, ‘[email protected]’, that you can email to fast-track an unlock, but it didn't seem to help.

I have since moved to Microsoft BPOS and, after migrating with the $10USD/account service from Migration Wiz and moving my MX records, I am now happily online with Exchange and Sharepoint for $17AUD/account/month. The online setup was not the easiest, especially as the local BPOS system is managed by Telstra, but now that it is operational it is working without a hitch. There is no need for a sync client for Outlook or phones (that include MS ActiveSync), and a “Single Sign On” app runs on my PCs so I don't need to log in each time. It is roughly three times the price of Google Apps (when you include Sharepoint as well) but, based on my experience so far, it is worth it.

I have since begun moving some clients to Microsoft BPOS and the feedback has been very positive. Personally, I now seamlessly sync a desktop and laptop PC, a Macbook, an iPad and an Android phone (I finally ditched my old Nokia E72; actually, I ditched my telco, Three, after their dismal performance since the merger with Vodafone). I have a number I can call where a real person can help me, and after a recent minor glitch where one of my accounts became corrupted and needed to be recovered (one of a lucky three people in the entire world, apparently), both Telstra's and Microsoft's performance in fixing the situation and keeping me informed was excellent.

Google Apps is pretty good and pretty reliable, but its lack of true business support (no phone support, far too restrictive email limits and no options if the system locks down an account) means that, for now, I don't recommend it for business use. For a very small business or family able to work within its limits it is great, but in my opinion it is still some way off being truly ready for business use.

I have also moved a client to HyperOffice with reasonable success, although their reliance on IMAP for email gets pretty slow for users with multiple large accounts connected. Their business model is far less “self service”: they are there to help, with a well integrated and executed system that is well suited to a widely dispersed workforce. It is pretty much all driven through the web interface, which has its quirks as well. It is more expensive, but their goal is to remove the need for IT staff and they are targeting a different market than Microsoft or Google.

De-Branding a Nokia E72

I have nothing against my mobile company, Three mobile in Australia, but I wish they would update their handsets occasionally. I have a Three-branded Nokia E72-2 and, even though there have been at least two major software updates from Nokia, there are none for mine (Three no longer even sell this handset, but still sell the older, slower E71 instead). Updates are tied to the model number found under the battery, which identifies the telco and the regional and language settings (Three Australia is 0591918). I did not want to buy a generic handset at full price, so if I wanted the latest software update to fix a few reliability and stability issues, I had to de-brand my phone.

A word of caution, this will not only void your warranty but, if you have a problem, may kill your beloved mobile (known as “Bricking”).

You need the following:

  • The Nemesis Service Suite
  • The latest Nokia Updater
  • A generic E72-2 model ID (Australian Generic 0586213 does not get the latest update either, I used the generic North American Code, 0573646 as the North American handsets are the same E72-2’s supporting HSDPA 850MHz)
  • A full backup of your phone.
  • A USB cable
  • Tissues for when you brick your phone

Plug in your phone (do not select “PC Suite”; close PC Suite on the computer as well) and run NSS. Do not unplug the phone or turn off your PC until everything has completed! Scan for your handset.

Click “Phone Info”

Put phone “Power Mode” into “Local Mode” in NSS

Then “Scan”.

Click the “Read” button to populate the settings boxes, change the ID to what you want, tick the “Enable” box and click “Write” to write it to the device. It takes effect immediately.

Put the phone back into normal mode, close NSS and run the Nokia Software Updater. The latest software is a 200MB download.
After it has applied, I recommend a hard reset to make sure you have cleaned out all the telco customisations.

Use your mobile with all its new features after putting all your settings back in.

Mine starts in a third of the time, as it doesn't have to play the Three video and audio introduction.

Posted in: Hardware

Microsoft Office 2010

I have just installed Microsoft Office 2010 on my work laptop. This may or may not have been a good idea, time will tell.

After attending a launch breakfast of Office 2010 a couple of days ago in Melbourne, it looked good enough that I had to give it a go, if only to be able to support my clients as they move up to it.

As I don't tend to use any add-ins and I run Windows 7 x64, I decided that Office 2010 64-bit would be the way of the future. If you have any add-ins, more than likely they won't run in 64-bit.

The first issue I had was discovering that I did in fact run an add-in: the Google Calendar Sync application, which maintains my appointment calendar between my desktop, laptop and mobile phone. Google won't support Office 2010 until the official public release (regardless of the fact that open/volume-licenced businesses have had access to it for a month already), so I had to find an alternative. A quick Google search turned up GSyncit, a cheap ($14.99USD) Outlook plugin that supports x64 Outlook 2010 for syncing calendar, contacts, tasks and notes with Google. Even better, it works.

First impressions: the addition of the Office “Ribbon” to Outlook is a bit different but pretty good; grouping by conversation (as Gmail has had since day one) is nice, and the ability to clean up redundant messages in a conversation and ignore conversations is also useful. Powerpoint's built-in image and video tools are a great improvement, and the web publish feature is great for quick, small presentation sharing in real time. Word looks pretty much the same; I would need Sharepoint available to take advantage of its multi-user simultaneous editing features (minimum 5 users @ $10USD/m each for Microsoft-hosted Exchange and Sharepoint, which could be worth it in the future when I start employing staff. It turns out that in Australia the MS hosting is managed by Telstra (bad), but they allow a single user @ $16.95AUD/m. I will give it a go).

Overall, it seems to be an improvement on Office 2007, but unless you are a power user or want the latest, there is probably no need to upgrade for the sake of it; most people only use a fraction of any of its apps anyway. One feature which may be of benefit is that OneNote is now standard across all versions of Office 2010. In Australia there are still (as of June 10th 2010) some Office 2007 Small Business edition retail boxes going very cheap (~$230AUD) that are eligible for a free upgrade to Office 2010 Professional, which is the cheapest way to get it (cheaper than an upgrade licence). Update June 13th: this software is nearly impossible to get now; it looks like the word got out.

I will edit this post with more updates as I find out more about it, good and bad.

Update: Where did my auto-complete addresses go? It turns out Outlook 2010 no longer uses the NK2 file that I have so diligently copied, backed up and restored over the years, so none of my auto-complete email addresses are there any more.

To import .nk2 files into Outlook 2010, follow these steps:

1. Make sure that the .nk2 file is in the following folder:
%appdata%\Microsoft\Outlook
(%appdata% already expands to …\AppData\Roaming, so this is the Roaming\Microsoft\Outlook folder.)

Note The .nk2 file must have the same name as your current Outlook 2010 profile. By default, the profile name is “Outlook.”

2. Click Start, and then click Run.

3. In the Open box, type outlook.exe /importnk2, and then click OK. This should import the .nk2 file into the Outlook 2010 profile.
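The copy-and-rename step can be sketched as a small helper. The paths and the default “Outlook” profile name here are examples; check your actual profile name first:

```python
import os
import shutil

def stage_nk2(old_nk2_path, outlook_data_dir, profile_name="Outlook"):
    """Copy an old .nk2 autocomplete file into Outlook's data folder,
    renamed to match the profile, ready for 'outlook.exe /importnk2'.

    outlook_data_dir would normally be %appdata%\\Microsoft\\Outlook;
    the default profile name "Outlook" is an assumption.
    """
    os.makedirs(outlook_data_dir, exist_ok=True)
    dest = os.path.join(outlook_data_dir, profile_name + ".nk2")
    shutil.copyfile(old_nk2_path, dest)
    return dest
```

Run Outlook with the /importnk2 switch afterwards, as in step 3 above.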

All my auto complete email addresses are back now. Happy me.

Update: I just discovered that Outlook will send an email from whichever account you are in at the time, regardless of your default settings… I have also signed up for Microsoft BPOS (Exchange Online), so I now have a full Exchange server behind my Outlook instead of Google. It seems to work well, albeit difficult to set up. I will post about it specifically another time.

Mac

Apple make the only other mainstream desktop alternative to Windows (although Linux on the desktop is starting to make some inroads).

As an IT consultant, I have two Windows machines (Win 7 and XP), a Core 2 Duo Mac Mini with Snow Leopard and an Ubuntu Lucid machine, as well as a VMWare machine with Windows, Linux and Openfiler servers on it. I have also just picked up a three-year-old Core Duo Macbook from a client after a hard drive failure; he didn't want it back.

For the end user, it is purely a personal preference issue. Ignoring the awesome efforts of the Apple marketing department to convince you otherwise, either Windows or Mac will do what you need. There is nothing that you can do on a Mac that cannot be done on Windows, and vice-versa. If you use Windows at work and need to work from home, especially if you have some specific work requirements (and a work IT department that can offer some assistance if needed), moving to a Mac at home can make things a bit harder between locations.

You will pay anything from a little more to a lot more for the equivalent Mac system, but you will get a very solid and stable desktop. It will also be prettier. Personally, I really dislike the feel of the Mac keyboards, both the desktop and laptop ones, but as I mentioned earlier it is a personal preference issue; they work, but I use a Microsoft keyboard and mouse with mine. You will have to spend extra to upgrade the warranty if you want three years of cover on the hardware (recommended for business use), whereas proper business-grade Windows desktops and laptops should come with three years already (though not all of them do). Macs are, by design, more secure than Windows PCs but (regardless of what the Apple marketing department says, again) they are not immune to online threats. Failing to take the same precautions on a Mac that you would take on a PC is not a good idea, especially with the prevalence of cross-platform vulnerabilities such as some recent examples in Java, Javascript and PDFs.

There tends to be more software available for the Windows platform, especially open source and free software; Mac users tend to pay for extras more often. That said, a lot of the usual free software I use on Windows is also available for the Mac (Filezilla, Mozilla Firefox, Mozilla Thunderbird, The Gimp, VLC). The Plex media centre application is exceptional: it is clean and it works, but it doesn't support TV. Macs can be a bit less upgradable than equivalent PCs in certain configurations and they are not as easy to work on (try replacing the optical drive in a Macbook compared to a Dell Latitude and you will see what I mean; how-to videos for opening a Mac Mini involve credit cards, fish lifters and pizza cutters). That said, the hardware is reasonably high quality and is matched to and tested with the Mac operating system (which is Unix based), so you should not have to search for drivers or run into too many compatibility issues. You can run Windows via Bootcamp or virtualisation if you need to. One big thing though: Macs and PCs share many common components such as hard drives, RAM, CPUs etc. Don't assume that your Mac is infallible; they die too, quite regularly I might add. Apple have made it easy to back up your Mac for a reason!

I don't yet think that Macs are fully business-ready. Sure, they will do most things, but when I am looking for a suitable business platform, onsite support is vital; I don't want to have to take a warranty claim to a shop. Interaction with Windows domains is pretty good but still not completely smooth, Exchange email does not have full functionality on the Mac, and many small to medium businesses run Windows domain-based networks. Being locked into proprietary hardware configurations is also not ideal. Any business rolling out large numbers of machines will shop around for the best deal; with the hardware locked to Apple only, this is not possible with Macs compared to the myriad PC options.

One other issue you may face with a Mac is the Mini DisplayPort. While DisplayPort and Mini DisplayPort are open standards (Apple helped fund DisplayPort development and have preferred Mini DisplayPort for their hardware), hardware support from third-party vendors is still pretty weak. This is fine if you want to spend a lot on a decent large Mac monitor, but using other screens is a little harder. You need to either spend an extra $45AUD for a DVI or VGA adapter (a Mac one), or find a monitor with DisplayPort and use a Mini DisplayPort to DisplayPort adapter (not common yet) to drive it natively. High-end Dell screens support DisplayPort, and Dell and HP business-grade laptops and projectors have full-size DisplayPort options, but unless you simply must have the (very beautiful and very expensive) Mac screens on your PC, few Windows users are going to pay the massive premium, considering you can buy five 24″ 1920×1080 screens for less than the price of one 24″ 1920×1200 Mac screen.

The size of updates is significant. I recently had my Mac Mini notify me that some updates were ready. I was expecting 100MB or so, especially since my Mac was straight from the shop, so you can imagine my surprise when 1.3GB of updates was required to bring my machine up to date. I thought the combined Windows Vista/Windows Server 2008 SP2 patch was excessive at just under 500MB. The bulk of this update was a minor version update of OSX from 10.6.2 to 10.6.3. Make sure you have decent internet speed and data available, or turn off automatic updates!

Uninterruptible Power Supply (UPS)

Do you need a UPS? YES!

A UPS is basically an additional insurance policy for your electronic equipment and vital for maximising the life of your hardware. Not only will it stop most power spikes and surges from reaching your hardware, it can also provide battery-powered backup for a period of time in the event of a brown-out or blackout, as well as voltage correction if the supply voltage is too high or too low.

While most businesses already have a UPS protecting their critical equipment, few households do. Small but capable UPSes are now cheap enough that there is basically no reason not to get one. An entry-level UPS is now under $100AUD, but I would recommend spending around $150-250AUD for a home PC system to get a bit more battery run time.

The big names in UPSes are APC (American Power Conversion) and Eaton. Eaton's consumer-grade UPSes are branded “Powerware”. There are some other good brands, such as Nikko, but there are also some cheap and nasty ones that should be avoided.

Things to look for when deciding what to buy:

  • Easy battery replacement – Batteries are a consumable item and last 3-5 years; most UPSes use readily available gel lead-acid batteries
  • Compatible sockets – APC tend to use the universal IEC C13 connector, which needs an IEC cable to connect to a device or a converter to connect to a powerboard; Powerware use Australian standard sockets
  • Connection (usually USB; network on commercial systems) to the PC being protected – allows a normal system shutdown when the battery level gets critically low
  • Run time and load requirements – Run time at full load is only a few minutes; if you want more run time, buy a bigger unit. APC has an online calculator to work out run time for a given load across their range. Larger commercial-grade UPSes can have extra battery packs added for extra run time
  • Online vs line-interactive – Most are line-interactive (cuts over to battery if the supply fails), but some of these do not like being run from a generator. If generator backup is required (a small petrol or diesel off-the-shelf unit, not purpose built), an online UPS (the supply charges the internal battery and all output comes from the battery at all times) is generally better, but can also be more expensive

For extra surge protection, supply your UPS through a surge-protected power point or double adapter. Surge protection is cumulative: a single device may not be able to stop a big spike (>1000 Joules), but two or three (rated at 500+ Joules each) in line may be enough.
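The sizing and surge arithmetic above can be sketched as follows. This is a simplification: real run times fall off non-linearly at high load, and the 90% efficiency figure is an assumption, so check the manufacturer's tables (e.g. APC's online calculator) before buying:

```python
def runtime_minutes(battery_wh, load_watts, efficiency=0.9):
    """Very rough UPS run-time estimate: usable battery energy over load.

    The 90% inverter efficiency is an assumed figure and real behaviour
    is non-linear, so treat this as a ballpark only.
    """
    return battery_wh * efficiency / load_watts * 60

def combined_joules(ratings):
    """Surge protection in series is roughly cumulative."""
    return sum(ratings)

# e.g. a ~100Wh battery running a 60W load, and three 500J protectors in line
print(runtime_minutes(100, 60))
print(combined_joules([500, 500, 500]))
```

The same functions show why a bigger unit buys run time: doubling the battery capacity at the same load doubles the estimate.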

Openfiler

I have been playing around with Openfiler for the past few weeks. Basically, Openfiler is an open source, customised Linux operating system specifically designed to be a file server, or more specifically an “Open Source Storage Management Appliance”. It has far more functionality than simple file storage though: it can be an FTP server, a Network Attached Storage (NAS) server, and even an iSCSI SAN if you need one (I used it while testing a VMWare vSphere infrastructure system). It will run on pretty much anything x86/x64 based (minimum spec: 1GHz processor with 512MB RAM), it can interface with a Windows domain, and its web-based interface is pretty simple to use, so anyone looking for a simple and cost-effective bulk storage solution should definitely have a look at it.

I had a test server (2x Xeon 2.8, 4GB RAM and 4x200GB SATA drives in RAID 5) to try it on. As I already had hardware RAID, I didn't need to implement software RAID, but as it supports software RAID 0, 1, 5, 6 or 10, I could have. One thing that caught me out was a limitation of four primary partitions on the drives. Apparently a normal implementation would have the Openfiler system (by default, four partitions) on a single drive or array, separate from the data storage. It is not recommended to have the OS and the data on the same disks, as a restoration may be more challenging. As I already had a four-disk RAID array ready to use, and this was for testing only, I just installed to that, and therefore could not use any of my drives for data, which rather defeated the purpose. A reinstallation on the same array, with a manual partition layout making the fourth partition extended rather than primary, gave me over 550GB of usable storage.

Following the basic installation instructions, I found it relatively simple to create a usable NAS box. I did not add it to a Windows domain, but I think that would actually be easier than configuring the Openfiler device as its own LDAP server. FTP was also pretty easy to get up and running. You don't need to know any Linux at all: the initial installation is graphical (unless you want console) and after that, all configuration is done via a web browser.
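As a sanity check on the numbers above, usable capacity for the common RAID levels can be worked out like this (a sketch that ignores filesystem and metadata overhead; note 200 decimal GB per disk is only about 186 binary GiB, which is why the 4x200GB RAID 5 array reports roughly 550GB usable rather than 600):

```python
def raid_capacity_gb(disks, size_gb, level):
    """Usable capacity (same units as size_gb) for common RAID levels."""
    if level == 0:
        return disks * size_gb          # striping, no redundancy
    if level == 1:
        return size_gb                  # full mirror
    if level == 5:
        return (disks - 1) * size_gb    # one disk's worth of parity
    if level == 6:
        return (disks - 2) * size_gb    # two disks' worth of parity
    if level == 10:
        return disks * size_gb // 2     # striped mirrors
    raise ValueError("unsupported RAID level")

print(raid_capacity_gb(4, 200, 5))  # the 4x200GB RAID 5 test array
```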

If you plan to use it for production block-level storage (iSCSI, SAN), you apparently should use a second network interface for management, although in testing I have not bothered and simply use it across my network with only minor performance issues. It is actually easier to set up as an iSCSI target than it was for NFS or FTP, and it is simple to connect to VMWare ESX (although I did need to reboot it after re-mapping LUNs before ESX could connect to it as an iSCSI target, even though it could see it). I also had no problems connecting my Windows 7 laptop to it using the built-in software iSCSI initiator, with pretty good performance (30-50MB/s over gigabit ethernet).

To set up Openfiler as an iSCSI SAN:
1a. Create a physical volume on a single disk, OR
1b. Create RAID volumes on multiple disks and create an array
2. Add the volumes from #1 into a Volume Group
3. Create an iSCSI volume in the VG from #2
4. Start the “iSCSI target server” service
5. Add a network entry for the client machine (or the local subnet if private) at the bottom of “System > Network Setup”
6. Click “iSCSI Targets” on the Volumes page
7. Click “Add” to create a new target
8. Click “LUN Mapping”
9. Click “Map”
10. Click “Network ACL”
11. Change the combo box for your network to “Allow”

I am not sure if I would roll this into production just yet, but for a backup storage system or bulk storage of non-critical files (I used to run a 250GB iTunes server at a music publishing company that could definitely benefit from this type of flexible storage), it could be very useful. There are purely commercial alternatives for production use, such as DataCore SANmelody, but there is an active Openfiler user base which should be able to assist, and commercial support options are available if required.

Posted in: Free Software

Website Hosting

There are so many options available for getting your website online that many people don't know where to start. I will run through a few (non-exhaustive) options, from limited and free to powerful (and expensive).

You can run a website from your home PC via your home internet connection. I wouldn't recommend it, but it can be done, and in the early years of the internet many sites were run in exactly this way via dial-up modems. I am not going to detail how, as it is now so cheap and easy to organise proper web hosting that it makes no sense for anyone.

Before you begin, you need to have your Domain Name registered and ready to use.

When you have your Domain Name ready to go, you should have a bit of a think about how you intend to use your web site. Do you imagine the web site scaling to handle very high numbers of users? Do you have a preferred platform (Windows or Linux are the two big options) or a preferred programming language and database system that you want to use? Basically, Linux hosting will always be cheaper than Windows, but it cannot be used for .NET applications or SQL Server databases. It can be used for PHP programming and MySQL databases, though, which are very widely used online. Windows can also support PHP and MySQL but, in my experience, shared hosting of PHP and MySQL on Windows hosts seems slower than on Linux, possibly due to licencing costs meaning more sites are hosted on each Windows server.

To begin with, you can host your website with pretty much anyone you like. Make sure they have an online reputation (www.hostsearch.com is a good place to start) and can support what you need (if you want to use a free web content management system like Joomla, Drupal, WordPress, MediaWiki etc, it must support PHP and MySQL). This site costs around $6AUD/m to host on Linux servers with unlimited space and bandwidth (I haven't fully explored the concept of ‘unlimited’ though), but I would probably be asked to leave, or have my site restricted in some way, if it became so popular that it affected the performance of the other sites hosted on the same server. Some hosts are ‘free’ but make their money with ads etc; others are quite expensive and offer Service Level Agreements (SLAs) regarding server up time. I have found Australian hosting to be considerably more expensive than hosting offshore, and the performance impact of being located in the US vs Australia is negligible. Once you have signed up, delegate your domain name to them (or point it to them if you host it elsewhere), upload your website and it will go live on the internet.

If you have some specific hosting needs, the next step up is a dedicated hosted server, probably a virtual server in a data centre. A virtual server is completely self-contained, but many of them share the same physical server (as average utilisation is always a small fraction of peak, this is a much better use of a physical machine). You can treat this like your own server; you will be given full access to it as if it were your own (but if you stuff it up, you have to fix it too). A dedicated server can usually handle a larger volume of traffic and users than a shared server as well.

A dedicated physical server is the next step up; you can lease one from a hosting company or install your own in a data centre. Prices start to rise with this option, with data centre space being expensive and increasingly scarce (in Melbourne, anyway).

Above this is the redundant server farm with load balancing. The sky is the limit once you get here in terms of how much you could potentially spend. Google has spent billions of dollars on their infrastructure but any level of load balancing and fault tolerance does not come cheap. This level of hosting is out of my league and more often than not would just be overkill for most business web sites.

Regardless of who you host with, make sure you keep regular backups of your website. If your hosting company goes under, you may need to get up and running with another host at very short notice, and may not get a chance to pull down a copy of your website before it gets turned off.

Posted in: The Web

FileZilla

Filezilla is a fully featured, free, open source File Transfer Protocol (FTP) client. It is cross-platform and is also my preferred FTP client on both Mac and Linux.

While it usually works flawlessly, occasionally I have found remote servers that, no matter what I try, just won't work with Filezilla. In those cases I use CoreFTP, which is also free.

If you want to set up an FTP server, there is also FileZilla Server, which is likewise free and simple to configure and use. It has a similar feel to Bullet Proof FTP Server but without the cost. It also works very well.

Filezilla is available from http://filezilla-project.org/

Filezilla user interface

Posted in: Free Software

TrueCrypt – File Encryption

If you have personal or sensitive business information, especially on a portable device such as a laptop, USB stick or portable hard drive, you should consider encrypting this data. If you lose any of these devices, any unencrypted data on them can quickly and easily fall into the wrong hands. One solution is a free and relatively easy to use encryption utility called TrueCrypt, which supports some of the strongest encryption available.

You have a few encryption options, from complete system encryption (a fully secured laptop), to an entire physical drive, to the simpler and more user-friendly encrypted virtual hard drive, which is just a file on your device that appears as a hard drive when you enter your password. You decide what data will be stored in the encrypted file. It is not as secure as a fully encrypted system, but it is a far better option than nothing at all and will take an enormous amount of effort to decrypt without the right password. I generally have an encrypted volume on any portable device, and any personal or important information sits in that. I also keep the TrueCrypt installer on an unencrypted part of the drive so I can install it if required (I also use a portable version that does not need to be installed).

The high-end versions of Windows Vista and Windows 7 (Ultimate) have BitLocker encryption built in if you want to encrypt your laptop. While it is built in, you have to have bought the expensive OS, and the encrypted data is not as flexible. A TrueCrypt “volume” can be mounted on pretty much any operating system (including Linux and Mac) and can be put on a USB stick or portable hard drive, which makes it portable. It cannot be read unless the right password is used.

With encryption though, the end user is the most likely weak point.

TrueCrypt can be downloaded from http://www.truecrypt.org/

Posted in: Free Software, Security

Domain Names

What you need to know

Your domain name is the online identity for your whole business, most importantly your email and website. For example, I currently use this one, abitofit.com.au, a variation on it to stop anyone else taking it, abitofit.net.au, and my personal name, benchapman.com, which I registered many years ago (late 1990s – you would be lucky to get your own name these days, especially the .com).

You probably have a domain name already but if not and you want to register one, there are a few things you need to do.

  1. Decide what domain name you want and see if it is available.

    To check if a domain name is taken, the first thing to do is try www.domainname.com (or .com.au, .net etc) in your web browser. This will tell you pretty quickly if it is in use. It won’t always show you if it is taken though, as many names are registered and never used. Reasons for this include stopping someone else using the name, or hoping to sell it later to someone who really wants it (domain names can be bought and sold like any other asset – one of the highest prices ever paid was US$12 million for sex.com; Australian domain names are not worth anywhere near that).

    To search for an inactive registered domain name, you need to do a WHOIS search. There are plenty of options to choose from (eg www.whois.com.au or mywebname.com.au). If the search does not return a match then there is a good chance the name can be registered. You may also like to review the list of names soon to be purged from the registry, at the official domain drop list. Before you register, think about what you want the name to mean to web users. Make it relevant to your business or name (.com.au names require an ABN to register), and make it easy to type and remember if possible (most three letter acronyms are already taken, and don’t make it too long). Try to avoid unwanted words when you join words together – a couple of famous joined word domain name blunders are Pen Island, Experts Exchange (since hyphenated), Powergen Italia and Therapist Finder; the list goes on. Check for conflicts with other registered business names and trademarks as well (for Australian businesses, you could begin by checking the Australian Business Register).

  2. Register your new domain name.

    To register a domain name, you need to find a domain registrar, and there are plenty to choose from. Don’t be too concerned about which registrar you use in Australia: only accredited registrars may sell a .com.au name. The list of accredited registrars can be found on the auda.org.au website, and most of them can also register other domains such as .com, .co.nz etc. You can change your registrar later if you really want to, but it is much more difficult than registering the name in the first place. You will receive a domain name password or key when you register your domain name – KEEP THIS SAFE! If you lose control of it, you can lose your domain name, and without it you will be unable to make changes.

  3. Delegate your domain name to Domain Name System (DNS) servers.

    Registering is only the first step. You then need to decide which name servers are responsible for looking after your domain name on the internet. Often your web site host provides this service as part of your hosting package, and this is usually easier than managing it yourself as they know what is needed to make your web site appear on the internet. After registering, you enter the details of the name servers that will look after your domain name with your registrar, usually via a web page you will be given details of when you register. The DNS servers must be ready to receive your information before you put the details in with your registrar, so you will need to find some website hosting first. If you want to look after the domain name yourself, you will need to know what you are doing in step 4 below.

  4. Set up your DNS settings for your web site and email and anything else you need it to do.

    If your web site host will look after your domain name for you then you can skip this section. They will set up your web site and email. If you are doing it yourself, you need to know about IP Addresses, MX Records, A Records, CNAME records and subdomains. You also need a DNS host that has a web interface for you to manage the records.

    1. IP Addresses are the numbers which correspond to an address on the internet. DNS servers point domain names to IP addresses: the IP address is the location on the internet, but a domain name is easier to remember and use. An (IPv4) IP address is a sequence of four numbers between 0 and 255, separated by periods (.), eg this site’s IP address is 65.60.54.210, which is the address of the server hosting the site.
    2. MX Records identify the servers responsible for your domain’s email. There should always be at least two (primary and secondary MX records) and there are often more. Each record holds a mail server’s host name (which in turn resolves to an IP address) and a number called a preference (or priority) determining the order that other mail servers should use when trying to deliver email (lowest first). I use Google Apps for business for my email, so I have 7 MX records corresponding to different Google servers able to receive email on my behalf.
    3. “A” Records map a host name directly to an IP address. Your website sits at an IP address that you will want your domain name to point to, and that mapping is an A Record. I have the A Record “abitofit.com.au” pointing to 65.60.54.210.
    4. A “CNAME” Record is an alias for an existing A Record, eg I have the A Record “abitofit.com.au” pointing to 65.60.54.210 and a CNAME alias for “www” pointing to “abitofit.com.au” so an end user can type in “abitofit.com.au” or “www.abitofit.com.au” and both will go to exactly the same web site.
    5. A subdomain allows you to use different prefixes of your domain name for different things. Often a subdomain like “mail” or “mx1” (rather than the more familiar “www”) will be set up as an A Record for your MX records to point to. I have mail.abitofit.com.au set up to point to Google Apps for business, which then redirects me to the webmail interface for my email. You can set up as many subdomains as you need for different purposes.

    If your web site host is looking after your DNS for you and you want to change your email host (for example, I use Google), then you need to give them the details required to make the changes on your behalf.

  5. Wait for the changes to propagate then test your domain name.

    The internet is huge and, while things happen pretty fast, some things still take time. One of these is the propagation of your domain name to DNS servers around the world (there are literally thousands of DNS servers, coordinated from a central core of 13 root server addresses). While the general rule of thumb is to allow up to 48 hours for worldwide propagation (it is not real time – servers check for updates periodically), in reality pretty much all servers will be updated within 12 hours, and in Australia alone I would be surprised if it took more than 2–4 hours to spread across the country.

    To test your domain name set up (whether or not you have set up your web site or email yet), you can use a tool built in to all operating systems called “ping” as a quick check. In Windows, open a command prompt (Start – Run – “cmd” then Enter), type “ping www.yourdomainname.com” and press Enter. If you receive the message “ping request could not find host www.yourdomainname.com” then the name is either incorrectly configured or has not propagated yet. The name should resolve to an IP address, eg “Pinging abitofit.com.au [65.60.54.210] with 32 bytes of data”. If you get nothing back (request timed out) don’t worry – it is the name resolution that is important here.
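The A, CNAME and MX records described in step 4 can be sketched as a toy lookup table. Everything below is illustrative (the zone dictionary and helper functions are my own, and real DNS resolution is far more involved), but it shows how a CNAME alias chases down to an A record, and how MX preference numbers order mail delivery:

```python
# A toy "zone" keyed by (name, record type); the values are examples only.
zone = {
    ("abitofit.com.au", "A"): ["65.60.54.210"],
    ("www.abitofit.com.au", "CNAME"): ["abitofit.com.au"],
    ("abitofit.com.au", "MX"): [(10, "aspmx.l.google.com"),
                                (20, "alt1.aspmx.l.google.com")],
}

def resolve_a(name: str) -> str:
    """Follow CNAME aliases until an A record (an IP address) is found."""
    while (name, "A") not in zone:
        name = zone[(name, "CNAME")][0]  # follow the alias
    return zone[(name, "A")][0]

def mail_servers(domain: str) -> list:
    """Return MX hosts in delivery order (lowest preference number first)."""
    return [host for pref, host in sorted(zone[(domain, "MX")])]

print(resolve_a("www.abitofit.com.au"))  # 65.60.54.210 via the CNAME
print(mail_servers("abitofit.com.au"))   # aspmx.l.google.com is tried first
```

This is why the www CNAME and the bare A record land visitors on exactly the same server, and why a sending mail server tries the preference-10 MX before falling back to preference 20.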
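Ping is the quickest manual check, but the same name-resolution test can be scripted. A minimal sketch using Python's standard socket module (the host names are placeholders – substitute your own domain):

```python
import socket

def resolves(hostname):
    """Return the IPv4 address a name resolves to, or None if lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:  # lookup failed: not propagated yet, or misconfigured
        return None

print(resolves("localhost"))              # 127.0.0.1 on most systems
print(resolves("no-such-host.invalid"))   # None: the .invalid TLD never resolves
```

As with ping, a successful lookup only proves the name resolves to an address; it says nothing about whether the web or mail server at that address is actually up.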

Posted in: The Web