Saturday, January 30, 2010

Storage Architecture Depends on Your Environment

Which storage architecture is appropriate for you depends heavily on your environment. What is more important to you: cost, complexity, flexibility, or raw throughput? Communicate your storage requirements to both NAS and SAN vendors and see what numbers they come up with for cost. If the cost and complexity of a SAN aren't completely out of the question, I'd recommend you benchmark both.

Make sure to solicit the help of each vendor during the benchmark. Proper configuration of each system is essential to proper performance, and you probably will not get it right on the first try. Have the vendors install the test NAS or SAN - even if you have to pay for it. It will be worth the money, especially if you've never configured one before.
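
If you want a rough feel for sequential throughput before the vendors arrive, you can script a quick sanity check. The sketch below is a minimal Python example, assuming the NAS mount or SAN filesystem sits at a mount point you choose (the path, file size, and block size here are hypothetical); it is no substitute for the vendor-assisted benchmarks of your real workload.

    # Rough sequential write/read check - a sanity test, not a benchmark.
    # TEST_DIR is a hypothetical mount point for the storage under test.
    import os, time

    TEST_DIR = "/mnt/storage-under-test"   # adjust to your NAS mount or SAN filesystem
    FILE_SIZE = 1 * 1024**3                # 1 GiB test file
    BLOCK = 1024 * 1024                    # 1 MiB per write
    path = os.path.join(TEST_DIR, "throughput.tmp")

    buf = os.urandom(BLOCK)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())               # force the data out of the page cache
    write_secs = time.time() - start

    start = time.time()
    with open(path, "rb") as f:
        while f.read(BLOCK):               # note: client caching may inflate this number
            pass
    read_secs = time.time() - start

    os.remove(path)
    print("write: %.1f MiB/s" % (FILE_SIZE / BLOCK / write_secs))
    print("read:  %.1f MiB/s" % (FILE_SIZE / BLOCK / read_secs))

Even a crude test like this will expose gross misconfiguration - wrong mount options, a half-duplex link - before you spend time and money on a formal bake-off.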

Monday, January 25, 2010

Pros and Cons of SANs

Many people swear by SANs and would never consider using NAS; they are aware that SANs are expensive and represent cutting-edge technology. They are willing to live with these downsides in order to experience the advantages they feel only SANs can offer. The following is a summary of these advantages:

SANs can serve raw devices

Neither NFS nor CIFS can serve raw devices via the network; they can only serve files. If your application requires access to a raw device, NAS is simply not an option.
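
To make the distinction concrete, here is a small illustrative Python sketch; the device path and mount point are hypothetical. An application that manages its own on-disk layout (as some databases do) must open a block device such as /dev/sdb, which a SAN can present to the host, while an NFS or CIFS mount only ever exposes files.

    import os

    # Hypothetical paths, for illustration only.
    RAW_DEVICE = "/dev/sdb"              # block device presented over a SAN (or local SCSI)
    NAS_FILE   = "/mnt/filer/data/file"  # a file on an NFS/CIFS mount

    # Opening the raw device works only if the host actually sees a block device
    # (and typically requires root); no NAS protocol can provide this.
    fd = os.open(RAW_DEVICE, os.O_RDWR)
    os.close(fd)

    # Over NAS, the protocol hands you files, never the underlying device.
    with open(NAS_FILE, "r+b") as f:     # fine for file-based applications
        f.read(512)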

SANs are more flexible
What some see as complexity, others see as flexibility. They like the features available with the filesystem or volume manager that they have purchased, and those features aren't available with NAS. While NFS and CIFS have been around for several years, the filesystem technology that a filer uses is often new, especially when compared to UFS, NTFS, or VxFS.

SANs can be faster

As discussed above, there are applications where SANs will be faster. If your application requires sustained throughput greater than what is available from the fastest filer, your only alternative is a SAN.

SANs are easier to back up
The throughput possible with a SAN makes large-scale backup and recovery much easier. In fact, large NAS environments take advantage of SAN technology in order to share a tape library and perform LAN-less backups.

SANs are also not without their foibles. The following list contains the difficulties many people have with SAN technology:

SANs are often more hype than reality
Perhaps in a few years, the vendors will have agreed upon an appropriate standard, and SAN management software will do everything it's supposed to do, with SAN equipment that's completely interoperable. I sure hope this happens.

SANs are complex
The concepts of Fibre Channel, arbitrated loop, fabric login, and device virtualization aren't always easy to grasp. The concepts of NFS and CIFS seem much simpler in comparison.

SANs are expensive
Although they are getting less expensive every day, a Fibre Channel HBA still costs much more than a standard Ethernet NIC. It's simply a matter of economies of scale: more people need Ethernet than need Fibre Channel.

Tuesday, January 19, 2010

The Pros and Cons of NAS

NAS filers have become popular with many people for many reasons. The following is a summary of several of those reasons:

Filers are fast enough for many applications
Many would argue that SANs are simply more powerful than NAS. Some would argue that NFS and CIFS running on top of TCP/IP create more overhead on the client than SCSI-3 running on top of Fibre Channel. This would mean that a single host could sustain more throughput to a SAN-based disk than to a NAS-based disk. While this may be true on very high-end servers, most real-world applications require much less throughput than the maximum available throughput of a filer.

NAS offers multihost filesystem access
A downside of SANs is that, while they do offer multihost access to devices, most applications want multihost access to files. If you want the systems connected to a SAN to read and write to the same file, you need a SAN- or cluster-based filesystem. Such filesystems are starting to become available, but they are usually expensive and are relatively new technologies. Filers, on the other hand, offer multihost access to files using technology that has existed since 1984.

NAS is easier to understand
Some people are concerned that they don't understand Fibre Channel and certainly don't understand fabric-based SANs. To these people, SANs represent a significant learning curve, whereas NAS doesn't. With NAS, all that's needed to implement a filer is to read the manual provided by the NAS vendor, which is usually rather brief; it doesn't need to be longer. With Fibre Channel, you first need to read about and understand it, and then read the HBA manual, the switch manual, and the manuals that come with any SAN management software.

Filers are easier to maintain
No one who has managed both a SAN and NAS will argue with this statement. SANs are composed of pieces of hardware from many vendors, including the HBA, the switch or hub, and the disk arrays. Each vendor is new to an environment that hasn't previously used a SAN. In comparison, filers allow the use of your existing network infrastructure. The only new vendor you need is the manufacturer of the filer itself. SANs have a larger number of components that can fail, fewer tools to troubleshoot these failures, and more possibilities of finger-pointing. All in all, a NAS-based network is easier to maintain.

Filers are much cheaper
Since filers allow you to leverage your existing network infrastructure, they are usually cheaper to implement than a SAN. A SAN requires the purchase of a Fibre Channel HBA for each host that's connected to the SAN, a port on a hub or switch to support each host, one or more disk arrays, and the appropriate cables to connect all this together. Even if you choose to install a separate LAN for your NAS traffic, the required components are still cheaper than their SAN counterparts.

Filers are easy to protect against failure
While not all NAS vendors offer this option, some filers can automatically replicate their filesystems to another filer at another location. This can be done using a very low-bandwidth network connection. While this can be accomplished with a SAN by purchasing one of several third-party packages, the functionality is built right into some filers and is therefore less expensive and more reliable.

Filers are here and now
Many people have criticized SANs for being more hype than reality. Too many vendors' systems are incompatible, and too many software pieces are just now being released. Many vendors are still fighting over the Fibre Channel standard. While there are many successfully implemented SANs today, there are many that aren't successful. If you connect equipment from the wrong vendors, things just won't work. In comparison, filers are completely interoperable, and the standards upon which they are based have been around for years.

Filers aren't without limitations. Here's a list of the limitations that exist as of this writing. Whether or not they still exist is left as an exercise for the reader.

Filers can be difficult to back up to tape
Although the snapshot and off-site replication software offered by some NAS vendors offers some wonderful recovery possibilities that are rather difficult to achieve with a SAN, filers must still be backed up to tape at some point, and backing up a filer to tape can be a challenge. One of the reasons is that performing a full backup to tape will typically task an I/O system much more than any other application. This means that backing up a really large filer to tape will create quite a load on the system. Although many filers have significantly improved their backup and recovery speeds, SANs are still faster when it comes to raw throughput to tape.

Filers can't do image-level backups
To date, all backup and recovery options for filers are file-based, which means the backup and recovery software is traversing the filesystem just as you do. There are a few applications that create millions of small files. Restoring millions of small files is perhaps the most difficult task a backup and recovery system will perform. More time is spent creating the inode than actually restoring the data, which is why most major backup/recovery software vendors have created software that can back up filesystems via the raw device - while maintaining file-level recoverability. Unfortunately, today's filers don't have a solution for this problem.
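
The cost of metadata creation during a small-file restore is easy to demonstrate for yourself. The following is a minimal Python sketch under the assumption that comparing "create many tiny files" against "write the same bytes into one large file" on the same mount roughly mirrors what a file-by-file restore asks of the filer; the path, file count, and sizes are hypothetical.

    # Compare creating many small files vs. one large file of the same total size.
    # Illustrates why file-by-file restores are dominated by inode/metadata creation.
    import os, time

    TARGET = "/mnt/filer/restore-test"    # hypothetical directory on the filer
    COUNT = 100_000                       # number of small files
    SIZE = 4 * 1024                       # 4 KB each
    payload = b"x" * SIZE

    os.makedirs(TARGET, exist_ok=True)

    start = time.time()
    for i in range(COUNT):
        with open(os.path.join(TARGET, f"f{i:06d}"), "wb") as f:
            f.write(payload)              # one file create + one tiny write per file
    many_small = time.time() - start

    start = time.time()
    with open(os.path.join(TARGET, "one-big-file"), "wb") as f:
        for _ in range(COUNT):
            f.write(payload)              # same bytes, but only one inode created
    one_large = time.time() - start

    print(f"{COUNT} small files: {many_small:.1f}s")
    print(f"one large file:  {one_large:.1f}s")

In most environments the small-file pass takes far longer even though the data volume is identical - the same penalty a file-based restore pays.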

The upper limit is lower than a SAN
Although it's arguable that most applications will never task a filer beyond its ability to transfer data, it's important to mention that theoretically a SAN should be able to transfer more data than NAS. If your application doesn't require incredible amounts of throughput, NAS offers a simpler, cheaper alternative to SANs. For the environments that do, however, a SAN may be the only option. Just make sure to test your system before buying it.

Friday, January 15, 2010

Enter the SAN

Some backup software vendors attempted to solve the cost problem by allowing a single library to connect to multiple hosts. If you purchased a large library with multiple SCSI connections, you could connect each one to a different host. This allowed you to share the tape library but not the drives. While this ability helped reduce the cost by sharing the robotics, it didn't completely remove the inefficiencies discussed earlier.

What was really needed was a way to share the drives. And as long as the tape drives were shared, disk drives could be shared too. What if:

  • A large database server could back up to a locally attached tape drive, but that tape drive could also be seen and used by another large server when it needed to back up to a locally attached tape drive?
  • The large database server's disks could be seen by another server that backed them up without sending the data through the CPU of the server that's using the database?
  • The disks and tape drives were connected in such a way that allowed the data to be sent directly from disk to tape without going through any server's CPU?

Fibre Channel and SANs have made all of these "what ifs" possible, along with many others. SANs are making backups more manageable than ever - regardless of the size of the servers being backed up. In many cases, SANs are making things possible that weren't conceivable with conventional parallel SCSI or LAN-based backups.

Wednesday, October 21, 2009

What are SANs and NAS?

Throughout the history of computing, people have wanted to share computing resources. The Burroughs Corporation had this in mind in 1961 when they developed multiprogramming and virtual memory. Shugart Associates felt that people would be interested in a way to easily use and share disk devices. That's why they defined the Shugart Associates System Interface (SASI) in 1979. This, of course, was the predecessor to SCSI - the Small Computer System Interface. In the early 1980s, a team of engineers at Sun Microsystems felt that people needed a better way to share files, so they developed NFS. Sun released it to the public in 1984, and it became the Unix community's prevalent method of sharing filesystems. Also in 1984, Sytek developed NetBIOS for IBM; NetBIOS would become the foundation for the SMB protocol that would ultimately become CIFS, the predominant method of sharing files in a Windows environment.

Neither storage area networks (SANs) nor network attached storage (NAS) are new concepts. SANs are simply the next evolution of SCSI, and NAS is the next evolution of NFS and CIFS.

History

As mentioned earlier, SCSI has its origins in SASI, defined by Shugart Associates in 1979. In 1981, Shugart and NCR joined forces to better document SASI and to add features from another interface developed by NCR. In 1982, the ANSI task group X3T9.3 drafted a formal proposal for the Small Computer System Interface (SCSI), which was to be based on SASI. After work by many companies and many people, SCSI became a formal ANSI standard in 1986. Shortly thereafter, work began on SCSI-2, which incorporated the Common Command Set into SCSI, as well as other enhancements. It was approved in July 1990. Although SCSI-2 became the de facto interface between storage devices and small to midrange computing devices, not everyone felt that traditional SCSI was a good idea. This was due to the physical and electrical characteristics of copper-based parallel SCSI cables. (SCSI systems based on such cables are now referred to as parallel SCSI, because the SCSI signals are carried across dozens of pairs of conductors in parallel.) Although SCSI has come a long way since 1990, the following limitations still apply to parallel SCSI:
  • Parallel SCSI is limited to 16 devices on a bus.
  • It's possible, but not usually practical, to connect two computing devices to the same storage device with parallel SCSI.
  • Due to cross talk between the individual conductors in a multiconductor parallel SCSI cable, as well as electrical interference from external sources, parallel SCSI has cable length limitations. Although this limitation has been somewhat overcome by SCSI-to-fiber-to-SCSI conversion boxes, these boxes aren't supported by many software and hardware vendors.
  • It's also important to note that each device added to a SCSI chain shortens its total possible length.

Wednesday, October 14, 2009

Nothing but net(work): Why you need one

Wireless home networking isn't just about linking computers to the Internet. Although that task is important - nay, critical - in today's network-focused environment, it's not the whole enchilada. Of the many benefits of having wireless in the home, most have one thing in common: sharing. When you connect the computers in your home through a network, you can share files, printers, scanners, and high-speed Internet connections among them. In addition, you can play multiuser games over your network, access public wireless networks while you're away from home, check wireless cameras, use Voice over IP (VoIP) services, or even enjoy your MP3s from your home stereo system while you're at work - really!

Reading Wireless Home Networking For Dummies, 3rd Edition, helps you understand how to create a whole-home wireless network to reach the nooks and crannies of your home. The big initial reason that people have wanted to put wireless networks in their homes has been to 'unwire' their PCs, especially laptops, to enable more freedom of access in the home. But just about every major consumer goods manufacturer is hard at work wirelessly enabling its devices so that they too can talk to other devices in the home - you can find home theater receivers, music players, and even flat-panel TVs with wireless capabilities built right in.

People go with wireless networking for:
  • File sharing
  • Internet connection sharing
  • Printer and peripheral sharing

Tuesday, October 13, 2009

Introduction to Wireless Networking

Over the past 5 years, the world has become increasingly mobile. As a result, traditional ways of networking the world have proven inadequate to meet the challenges posed by our new collective lifestyle. If users must be connected to a network by physical cables, their movement is dramatically reduced. Wireless connectivity, however, poses no such restriction and allows a great deal more free movement on the part of the network user. Consequently, wireless technologies are encroaching on the traditional realm of 'fixed' or 'wired' networks. This change is obvious to anybody who drives on a regular basis: one of the 'life and death' challenges of the daily drive is the gauntlet of erratically driven cars with mobile phone users in the driver's seat.

Wireless connectivity for voice telephony has created a whole new industry. Adding mobile connectivity into the mix for telephony has had profound influences on the business of delivering voice calls because callers connect to people, not devices. We are on the cusp of an equally profound change in computer networking. Wireless telephony has been successful because it enables people to connect with each other regardless of location. New technologies targeted at computer networks promise to do the same for Internet connectivity. The most successful wireless data networking technology thus far has been 802.11.