Information Technology

Should your company implement a bring your own device policy?

There is a lot of talk in IT circles about "bring your own device" (BYOD) policies: the idea that employees bring their own smartphone, tablet, or even laptop and do their work on it. There are plenty of benefits for both the user and the business.

For the user, the same environment serves both home and work, and they can make configuration changes that may greatly improve their efficiency.
They can use the shiny new device they waited in line to purchase, all without submitting a requisition or justifying the purchase to anyone other than themselves.
For the business, it is one less piece of hardware to budget, requisition, purchase, depreciate, replace, and dispose of.
However, the business must still manage the computer. More specifically, it must still manage how the BYOD device accesses the company's data, including email and calendar information. In fact, many businesses don't have a choice, due to PCI and/or HIPAA regulations.
There are a few key components of managing BYOD devices that make it practical to use them in business at all.
  1. The most sensitive data must live on a virtual desktop, such as Microsoft Terminal Services, VMware View, or Citrix XenDesktop. These give you a cloud-based workspace for your sensitive data, so that it never actually resides on the employee's device. Controls can then be placed so that access to the data is restricted by user, time, and location.
  2. A security policy needs to be defined for BYO devices, requiring them to conform to antivirus and password requirements while taking into account the type of device itself. For instance, it is unnecessary to put antivirus software on an iOS-based device such as the iPad.
  3. A policy needs to be considered that allows remote wiping of an employee's device. This way, the company can be certain that its data is safe, even if the device is lost or stolen. This policy should also spell out what happens in the event of termination, so that the employee understands the risks they accept.
  4. A backup policy needs to be in place for any company data located on the device. It should be made clear that the user is responsible for maintaining good backups of their personal data.
More and more companies are using BYOD policies to lower the overhead of their business. Is it time to consider it for yours?

When to use a wired network

Wireless networks are nearly ubiquitous. They're reasonably fast, convenient, and secure (provided you aren't using WEP encryption), and frankly, many devices no longer have a wired Ethernet port anyway. Even so, it is often best to use a wired network connection when one is available and convenient.

It should go without saying that any sort of file server should be wired, but there are other good uses.


For instance, a desk should always have a wired connection, even if the user is on a portable computer. This allows for greater speed and less interference for those who remain on the wireless network.

A device used for a presentation at a conference should use a wired network. This might mean purchasing a USB Ethernet adapter, if the presenter's device supports one. A wired connection is impossible, however, if the presentation is coming off an iPad. In that case, there should be a wireless base station with a private network situated as close to the presenter as possible, so that its signal wins out over other Wi-Fi traffic that might interfere with the presentation.

Any sort of video streaming generally calls for a wired connection. It's not imperative, but if there is a lot of other traffic on the wireless network, there may be issues. It's best to just wire the connection and eliminate the possibility.

The trend by now should be obvious: if you need your transmission to go through, despite what everyone else is doing, go wired.

- Jon Thompson

File formats matter

What do .csv, .xls, .xlsx, and .ods all have in common?

They're all different ways to represent spreadsheet information. However, they're not all created equal.

I recently had a client who was working on a large analysis piece. He downloaded a data set in CSV format, and proceeded to add multiple tabs of equations to it over the course of many, many hours. The entire time, he thought that autosave was storing those equations.

Excel crashed.

Pressure...

He then opens his (still csv) spreadsheet. All of the tabs are missing. He goes to the recovery files. All of the tabs are missing. He goes to his backup. All of the tabs are missing. It is like they never existed in the first place.

The problem is that csv is a very simple format. It doesn't support multiple tabs. It doesn't support much of anything. However, it is really easy to write, really efficient in storage space, and able to be imported into virtually any spreadsheet or database program, so it is a popular format for data sets.
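
To see why, it helps to look at what a csv file actually holds. Here is a minimal Python sketch (the data and filename are made up purely for illustration):

    import csv

    # A csv file is nothing more than rows of delimited text. There is nowhere in
    # the format to store a second sheet, a chart, or a live formula.
    rows = [
        ["Region", "Q1", "Q2"],
        ["East", 120, 135],
        ["West", 98, 110],
        ["Total", "=SUM(B2:B3)", "=SUM(C2:C3)"],  # stored as plain characters, nothing more
    ]

    with open("report.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)

    # Reading it back shows flat strings only; extra tabs simply have no place to go.
    with open("report.csv", newline="") as f:
        for row in csv.reader(f):
            print(row)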

Excel translated the file internally into its own formats (.xls or .xlsx), which do support those features. However, autosave down-converted it back to CSV, stripping all of the analysis out of the file.

All of this would have been avoided if he had initially saved a version as an .xlsx file, then made his changes.

C'est la Vie.

Jon Thompson

Keep your software current

Often, I come across a business that is woefully out of date with its software. This is an IT management nightmare. What usually happens is that they use whatever software came with each computer at the time, and never invest in standardizing their software. This creates technical difficulties in workflows, as well as differences in interface and features between the same software on two computers in the same office.

There is a cost associated: license fees for major upgrades, as well as training time and reduced efficiency while users learn the new software. However, it is an eventuality. Computers need to be replaced on a schedule, and newer computers often won't support older software. Therefore, software needs to be updated so that versions remain consistent between computers.


Let's use the relatively ubiquitous Microsoft Office as an example. In Office 2007/2008, Microsoft implemented a very different series of XML-based (Extensible Markup Language) file formats: docx, xlsx, and pptx. The advantages are significant: smaller files, vastly increased stability, richer content, and interoperability with non-Microsoft software.
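
If you are curious what "XML-based" means in practice, the newer formats are just zip archives full of XML parts, which you can inspect yourself. A short Python sketch (the filename is only an example; point it at any .docx, .xlsx, or .pptx you have on hand):

    import zipfile

    # The Office Open XML formats (.docx, .xlsx, .pptx) are ordinary zip archives
    # containing XML "parts" -- one reason the files are smaller.
    path = "report.docx"  # example filename; substitute one of your own documents

    with zipfile.ZipFile(path) as archive:
        for info in archive.infolist():
            print(f"{info.filename:45} {info.file_size:>8} bytes")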

The drawback is that they are not backwards compatible with older versions of Office (with the exception of Office 2003/2004, which have a compatibility update available). This has resulted in many companies continuing to standardize on the older, larger, bug-ridden formats. I do realize that interoperability between companies is an issue with the newer formats too, but I argue that this is an independent, temporary issue, which can be worked around by down-converting documents as needed, rather than mandating the older format across the board.

Next, the interface and feature set often change between versions. This is most obvious, again, in Microsoft Office. In Office 2007/2011, Microsoft introduced the Fluent user interface (the ribbon). If a business has a mix of Fluent-based and older versions of Office, it becomes difficult for users to collaborate and share tips on how to perform tasks within Office.

Office is not the only software that needs to be maintained. Windows, Mac OS X, and Adobe Creative Suite all have interoperability issues between versions. The worst, however, is web browsers. Modern web browsers adhere to standards much better than older ones, yet a large number of older browsers are still in use today.

Get them updated.

- Jon Thompson

How secure is your password?

There has been a lot of discussion as of late about passwords, and in particular what makes a good one. I'll start with my previous view of what makes a good password for end users. It's probably similar to password policies that most readers are familiar with:

  • Minimum of six characters
  • A lowercase letter
  • An uppercase letter
  • A number
  • A special character (like !&@^)
  • Can't be the same as the username

Assuming random combinations, this results in 8.4E73 (84 followed by 72 zeroes) combinations. If we assume one thousand guesses per second by a computer, we're talking about far longer than the age of the universe to guess the password. Your password sounds secure.

I've never been a fan of frequent forced password resets, primarily because the password is more likely to be written down, which immediately eliminates any security a password provides, as it can be copied without the user's knowledge. The alternative is that the user will create a variation of their previous password, which is easily defeated if the previous password is already compromised.

The problem is that we don't remember random combinations well, so we'll often rely on a word to help us remember our password. A password dictionary contains roughly 1.7 million common passwords; add in another 170,000 standard words from the Oxford English Dictionary, and you have about 2 million candidates. Figure that every word has 100 variations, such as 's' replaced by '5', and there are still only 200 million easily calculated passwords. That's about 55 hours for a computer guessing at 1,000 guesses per second.
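
The arithmetic behind both figures is easy to reproduce. A small Python sketch, taking the counts above at face value and assuming the same 1,000 guesses per second:

    GUESSES_PER_SECOND = 1_000
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def time_to_exhaust(combinations):
        """Worst-case time to try every candidate at the assumed guessing rate."""
        seconds = combinations / GUESSES_PER_SECOND
        if seconds < SECONDS_PER_YEAR:
            return f"about {seconds / 3600:.0f} hours"
        return f"about {seconds / SECONDS_PER_YEAR:.1e} years"

    print("fully random password (8.4E73 combinations):", time_to_exhaust(8.4e73))
    print("word-based password (200 million candidates):", time_to_exhaust(2e8))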

Your word-based password is insecure.

There is a lot of talk about not having dictionary words. This does two things:

First, it does make the password much, much harder to crack, as we're back up to 8.4E73 combinations. Second, it makes the password impossible to remember, requiring the use of password managers such as 1Password.

In light of this, I've started to talk with clients about a new policy:

Image via xkcd: "Password Strength"

  • Minimum of 15 characters
  • Minimum of four words
  • Easy to remember

Having four easily remembered words results in 8.5E20 different word combinations. At the same 1,000 guesses per second, we're talking tens of billions of years to guess the password. That's before adding any uppercase letters, special characters, or numbers. And we can actually remember four words.

The workflow I suggest for long passwords is to think of four words that aren't initially related and put them together in a sentence. There are tools online to help you create the word list.
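
As an illustration of the idea (not an endorsement of any particular tool), a few lines of Python can pull four random words from a word list. The /usr/share/dict/words path is a common location on Unix-like systems and is only an example; substitute any word list you like:

    import secrets

    # Example word list location on many Unix-like systems; substitute your own file.
    with open("/usr/share/dict/words") as f:
        words = [w.strip().lower() for w in f
                 if w.strip().isalpha() and 3 <= len(w.strip()) <= 8]

    # secrets (rather than random) is the module intended for security-sensitive choices.
    passphrase = " ".join(secrets.choice(words) for _ in range(4))
    print(passphrase)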

Once you have a long password, you need to practice it several times before you actually change it. This helps commit it to memory. Open a text editor, turn off your monitor, and type your password and hit return. Do this five times, then turn on your monitor. If all five are the same, change your password. If there is any variation, repeat the process until there isn't.

Now you have a secure password. Provided you don't write it down or tell it to someone.

- Jon Thompson

The three different types of server hosting

One of the most misconstrued aspects of computing is what exactly a server is. A server is nothing but another computer offering services (such as a website) that one or more other computers access.

With that said, there are three ways to host a server: self-hosted, co-located, and hosted. In reality, there are many options within each, but these three serve as the basis.

Self Hosting

A self-hosted server means that you have it internally within your office. For instance, many organizations will have a file server that they utilize to share files between individuals, as well as consolidate data.

The benefits of this method are internal speed and control. The drawbacks are cost, in both labor and bandwidth, and reliability.

An internal office network is generally much faster than the internet connection to the same office. For a simple transfer of a file between two desks, copying it internally can take a tenth of the time of uploading it to an external service and then downloading it to the second computer. The file also never leaves the office, meaning there is a greater level of control over who is allowed to access the data.
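
To put rough numbers on that claim, here is a quick back-of-the-envelope calculation in Python. The file size and link speeds are illustrative assumptions, not measurements:

    FILE_MB = 500  # example file size

    def transfer_seconds(size_mb, link_mbps):
        """Seconds to move size_mb megabytes over a link rated at link_mbps megabits per second."""
        return size_mb * 8 / link_mbps

    # Assumed speeds: a 100 Mbps office LAN, and a 20 Mbps internet connection each way.
    internal = transfer_seconds(FILE_MB, 100)
    external = transfer_seconds(FILE_MB, 20) + transfer_seconds(FILE_MB, 20)  # upload, then download

    print(f"Desk to desk over the LAN: {internal:.0f} seconds")
    print(f"Up to the cloud and back down: {external:.0f} seconds")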

However, a self-hosted server must be routinely maintained, otherwise there is a risk of data loss or a security breach. Since a self-hosted server is often one of only a few within a business, the cost of that maintenance is often higher than the cost of externally hosted options. The other drawback is that a small business often does not have the resources to offer redundancy at the level offered by other solutions.

Co-Located

A co-located server is one that a business sends to a host, usually a data center, where it will reside while it provides the services that are needed. At the data center, the server will have redundancies in power sources (often to the power plant level) and internet connections, as well as economies of scale for the facility and cooling.

However, a co-located server must still be maintained by the business, with the added caveat of fees associated if the host needs to perform actions on the device. The co-located server must be accessed through a business' internet connection, which is often a bottleneck in terms of speed.

Hosted

Hosted servers can take many forms, but the basis is that the hosting company provides and manages the hardware on which the business' data resides.

Because the hosting provider offers this service to many companies, its management of each business' service costs a fraction of what the business would incur on its own for the same performance.

The drawback is that the business' data is now on another business' hardware. Depending on how this is handled, and what data is hosted, the business may be in violation of HIPAA, PCI, or Sarbanes-Oxley. Issues may also arise due to other businesses being hosted on the same hardware.

Hosting companies are aware of these drawbacks, and have products to eliminate them. I'll discuss them in a future article.

- Jon Thompson

Email size limits

Most email administrators require attachments to be below a certain size. Unfortunately for the user, this can cause all sorts of headaches when they need to send a document that is larger than the limit. For instance, Google has a standard 25 megabyte (MB) limit, which allows the vast majority of emails but blocks the most problematic. I've used 25 MB as a limit before.

An Example

Without these limits, email would be virtually unmanageable for an organization of any size. Here's an extreme example of a server without limits, and what can go wrong:

A user sent a very large, 1024 MB (1 gigabyte (GB)) video file via email to eight people: four were other users within the email system, and four were external, at four different organizations.

The four internal emails were all delivered. However, the email was duplicated four times on the server, then cached on each recipient's personal devices. So the 1 GB email is now taking at least 8 GB of your organization's resources, plus the bandwidth it took for each of them to download the email, restricting email access for everyone else in the meantime.

Of the four external emails, two were blocked, because the receiving servers had restrictions that prevented delivery.

The other two were delivered, but that required your mail service to spend bandwidth delivering the file to each of their servers, then at least 2 GB of the external services' storage space, plus the bandwidth to deliver it to the recipients' devices.

All in all, the one email used at least 12 GB of storage space and 8 GB of bandwidth, and wasn't delivered to two of the eight recipients. This also ignores document retention policies, which means those copies could easily be kept for years after the original email has been deleted.
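
One way to arrive at figures in that ballpark (the exact breakdown is an assumption; the point is how quickly a single attachment multiplies) is sketched below in Python:

    ATTACHMENT_GB = 1
    INTERNAL_RECIPIENTS = 4
    EXTERNAL_DELIVERED = 2

    # Storage: the sender's server and device copies, a server mailbox copy plus a cached
    # device copy for each internal recipient, and a copy on each external server that accepted it.
    storage_gb = (2 * ATTACHMENT_GB
                  + 2 * INTERNAL_RECIPIENTS * ATTACHMENT_GB
                  + EXTERNAL_DELIVERED * ATTACHMENT_GB)

    # Bandwidth: each internal recipient downloads the file, and each delivered external copy
    # is sent out once and downloaded again on the far end.
    bandwidth_gb = (INTERNAL_RECIPIENTS + 2 * EXTERNAL_DELIVERED) * ATTACHMENT_GB

    print(f"Storage consumed: at least {storage_gb} GB")      # 12 GB
    print(f"Bandwidth consumed: at least {bandwidth_gb} GB")  # 8 GB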

Scaling this to entire organizations results in problems managing storage space and bandwidth, as both the number of users and, often, the number of recipients per email go up.

Alternatives

A very simple solution is to email a link to a shared resource. This could be a link to a web server, file server, SharePoint site, or other collaboration space that both the sender and recipient have access to.

Many media companies maintain an FTP (File Transfer Protocol) server that allows external clients to share very large files with them. They're used to working with large files, so this provides an industry-standard, if antiquated, way of dealing with the problem.
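
For what it's worth, scripting a transfer to such a server takes only a few lines of Python. The hostname, credentials, and filename below are placeholders, not a real service:

    from ftplib import FTP

    # Placeholder server and credentials -- substitute whatever your client provides.
    with FTP("ftp.example.com") as ftp:
        ftp.login(user="client", passwd="changeme")
        with open("final_cut_v3.mov", "rb") as f:
            ftp.storbinary("STOR final_cut_v3.mov", f)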

Dropbox has also become a very popular resource for sharing files between small groups, as it allows the contents of a folder to be shared between devices without any real work on the user's part.

If the amount of data is large, time is short, and you are geographically close, there's usually the possibility of handing someone a thumb drive, too.

Of course, this all assumes that the data isn't confidential. Since we started with email, which is insecure by nature, this is a fair assumption. Email security is another article.

- Jon Thompson

 

Backup strategies to prevent data loss

My previous article described different factors that result in lost data. It also said that I would discuss backup strategies. In reality, there is only one backup strategy:

Redundancy

Having multiple copies of your data is the only way to protect it. Of course, there are many ways to make data redundant, depending on how fast a business needs to be back online, and how much money they have to spend.

  • External hard drives are cheaper and larger every year, and basic backup software is built into every modern Operating System. Recovery can take anywhere from hours to weeks, depending on the problem and what it takes to recover.
  • Cloud backup can be had for very little per month.
  • RAID (Redundant Array of Inexpensive Disks) arrays will keep your server running even with one or more failed disks, but will only protect against hard drive failure.
  • Virtualization makes it possible for your server hardware to fail without any downtime at all, provided you have enough hardware.
  • At the top end of the scale, many larger companies have duplicate data centers in faraway geographic locations, ready to go in the event that their primary one is unavailable.

Of course hosting services have reduced the cost of all of these things significantly, but there is a bandwidth bottleneck that restricts many services.
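
As a deliberately minimal sketch of the external-drive option above (the paths are examples, and a real backup tool would copy incrementally and keep versions), a scheduled Python script can mirror a folder to a second disk and flag obvious gaps:

    import filecmp
    import shutil
    from pathlib import Path

    # Example locations -- substitute your data folder and your external drive's mount point.
    SOURCE = Path.home() / "Documents"
    DEST = Path("/Volumes/BackupDrive/Documents")

    # Copy the whole tree; dirs_exist_ok (Python 3.8+) lets the script run repeatedly.
    shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)

    # A crude sanity check of the top level: report anything missing or different.
    comparison = filecmp.dircmp(SOURCE, DEST)
    if comparison.left_only or comparison.diff_files:
        print("Backup incomplete:", comparison.left_only + comparison.diff_files)
    else:
        print("Top-level backup looks consistent.")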

There are two other strategies that people often think of as backup. However, they are false security, as each has a fallacy unless treated properly.

Synchronization 

When data is synchronized between devices, using technologies such as Dropbox, Windows profile synchronization, and Mac OS X's Portable Home Directories, it appears to be redundant. The fallacy is that when data flows two ways, it is easily deleted in either, or both, directions.

  • Corruption could cause an empty folder to sync with all computers where there was once vital data.
  • The wrong timestamp is placed on a file, and data is reverted to an older version. 
  • There are now two entry points for malware, as it can flow in either direction.

That's not to say that synchronization doesn't have its purpose. It does. Dropbox is ubiquitous in the tech startup community. Document synchronization lets enterprise users take portables out of their offices, or even use both a desktop and a portable. And our mobile devices would be much more work if our contacts, calendars, and email weren't synchronized between them.

Archival

I always cringe when a client tells me that they have a backup of their data, then produces a single external drive with their previous year of data on it. When I ask where the second copy of their data is, they shrug. When data is archived, it must still be redundant.

Another problem with archiving is media lifespans. Technology changes. Media deteriorates. Without a plan to migrate archived data to new media every few years, a business may not be able to access the data at all.

One extreme example: a client brought me data that had been written to reel-to-reel tape in the '70s and asked me to recover it. It took six months to find a tape machine and computer old enough to read the data at all.

I was astounded that one was found at all.

-- Jon Thompson

Steve Jobs 1955-2011

Without Steve, we'd all be using mainframe computers that fit in small rooms, rather than personal computers under our desks.

Without Steve, we'd all be typing commands into a prompt, rather than clicking and dragging icons around a window.

Without Steve, we wouldn't be able to watch video on our mobile phones.

Without Steve, we wouldn't be passing our tablets back and forth; we'd still be sliding our laptops across the table.

Today, we lost the quintessential Henry Ford of our time.

Thank you, Steve. You didn't distort our reality, you changed it.

- Jon Thompson

What makes your data vulnerable?

It's a very difficult situation when I must tell a new client that they have to spend a minimum of $1,500 just to get back to the point they were at hours ago. That's for a seven-day turnaround; next-day service starts at $12,000. Not new equipment, just recovering the business' data. And that's assuming the wizards at my favorite data recovery center can actually recover the data, which isn't guaranteed.

While the equipment that is used to access data is easily replaced, often the data itself is not. Furthermore, a business' reputation is often at stake. Needless to say, a critical part of your IT services is to plan for data loss. Let's look at the largest causes of data loss, along with the data backup philosophy that fits best with minimizing it:

Intentional and unintentional actions:

An end user accidentally deletes a file. Another forgets to save it in the first place. These are intentional and unintentional actions. Not malicious, but damaging all the same.

This is why applications such as the Microsoft Office suite, and now even operating systems such as Apple's OS X Lion, offer automatic save technologies. Even with such technology, it is still easy to accidentally delete a file. Because of this, I am a fan of self-service local backups for end users. This way, they can recover from accidental deletions without needing to involve IT staff.

Disaster:

In the Midwest, our primary concerns are fire, flood, and tornadoes. Other places might add hurricanes, earthquakes, and volcanoes.

An offsite backup is required for this type of data loss. Many small businesses rely on the "take a backup home" method for protection. However, they are still taking the risk that their home will be affected by the same disaster as their office. With the advent of cloud backups, it is possible to obtain inexpensive offsite backup that is thousands of miles away, allowing the greatest protection against disaster.

Crime:

Theft, viruses, and unauthorized intrusions all fit within this category. It's also the sad state of our world that we must acknowledge possible terrorism, and how to respond.

As with disasters, offsite backups are key, but another variable is also important: offline backups. It's important that there is at least one copy of the data that an intruder cannot modify. This is especially important with cloud-based backup services, because they are essentially online all the time. To account for this, most services keep multiple copies of all files, and make it difficult to delete files and impossible to modify them.

Failures:

This is the largest category, covering hardware failures such as hard drive crashes, power failures and overloads, and data corruption. Cloud computing adds the possibility of a business failure, where the service you are using closes its doors, or changes its licensing in a way that is incompatible with your business.

Hardware failures are often dealt with by building in redundancy. Since hard drives are both a frequent point of failure and inexpensive, IT staff will often place RAID (Redundant Array of Inexpensive Disks) arrays in servers, where downtime often means thousands of dollars of lost work. Virtualization, which I will cover in a later article, allows redundancy across actual computers, further reducing the possibility of data loss, and even reducing the possibility of downtime.

Power failures and overloads are often accounted for with UPSs (uninterruptible power supplies), which perform two functions. First, they keep your equipment running during short power events. Second, they notify the computer equipment of an impending power failure so that it can power down gracefully, reducing the possibility of data corruption.

Finally, the advent of cloud computing has amplified the ability for business failures to affect a company's data. Don't think of this as just a cloud business failing, but rather anything that could go wrong with an external service. Therefore, it is important to include the cloud service in your backup plans, or provide local redundancy if the cloud is the backup plan.

Conclusion:

In a future article, we'll further discuss the strategies used, and how to balance them with the costs involved.

- Jon Thompson

A quick guide to troubleshooting network connectivity

Greetings, my name is Jon Thompson, and I am excited to write about Information Technology for IowaBiz.com. I've been in the IT business for coming on fifteen years now, and am eager to share my knowledge with the IowaBiz.com community. My business, Evolve, works solely with Apple Macintosh and iOS (for those who don't know the lingo, that's the iPad, iPod Touch, and iPhone). However, my experience transcends particular platforms, and I'll always make sure that my posts are informative for the entire business computing ecosystem.


As an IT consultant, I am often diagnosing network connectivity, which is increasingly vital as services continue to move into the cloud. Enterprise network downtime costs have been discussed for years now, and have even been extrapolated down to SMBs. However, cloud computing comes with its own risks.

A user following a few standard steps can determine whether a connectivity issue is internal to their business, or something that is beyond their control.

 

 

    1. Start by identifying potential problems on your own computer and work toward the cloud.

Check to see whether another device is having problems as well. If it is not, chances are the problem is with your computer, rather than the network.

    2. Check your network connection. 

A computer isn’t able to communicate if it doesn’t have some sort of wireless or wired signal.

    3. Check the IP address of your local router.

Routers generally have a web interface, and the IP address is easy to locate in your computer's network settings. Look for a number with the word "router" or "gateway" beside it that looks like either 192.168.x.x or 10.x.x.x. The exact number might be different, but these two formats are very common.

    4. Check the link to the ISP.

The modem and/or router will have a light that indicates whether the device is actually talking with your ISP. If it is not:

    5. Check the connections on the back of the router.

DSL will have a phone connection; cable will have a coax connection. Make sure they are connected to the proper location on the wall.

    6. Check DNS. 

DNS is the system that translates names, such as iowabiz.com, into IP addresses. When it isn't working properly, it will feel like a broken internet connection. To troubleshoot, enter 209.85.225.104 into the address bar of your web browser. If Google appears there but sites won't load by name, you have a DNS problem.

    7. Contact the Cloud service provider.

At this point, chances are the issue you are facing is with the cloud service provider.

By having a basic understanding of the network troubleshooting workflow, one can shorten the time it takes to work with IT and minimize downtime.
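
For those comfortable with a little scripting, several of these checks can be automated. A rough Python sketch of steps 1 through 6 (the gateway address is an example, and the 209.85.225.104 test address from step 6 may no longer answer):

    import socket
    import subprocess

    GATEWAY = "192.168.1.1"      # substitute your router's address from step 3
    TEST_IP = "209.85.225.104"   # the numeric address used in step 6 -- an example only
    TEST_NAME = "iowabiz.com"

    def can_ping(host):
        """Return True if a single ping to host succeeds (-c works on Mac OS X and Linux)."""
        result = subprocess.run(["ping", "-c", "1", host],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def can_resolve(name):
        """Return True if DNS can translate the name into an address."""
        try:
            socket.gethostbyname(name)
            return True
        except socket.gaierror:
            return False

    print("Local router reachable:  ", can_ping(GATEWAY))      # steps 2-3
    print("Internet reachable by IP:", can_ping(TEST_IP))      # steps 4-5
    print("DNS working:             ", can_resolve(TEST_NAME)) # step 6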

- Jon Thompson

