Welcome to elreno.org

Web Hosting - The Internet and How It Works

In one sense, detailing the statement in the title would require at least a book. In another sense, it can't be fully explained at all, since there's no central authority that designs or implements the highly distributed entity called the Internet. But the basics can certainly be outlined, simply and briefly. And it's in the interest of any novice web site owner to have some idea of how their tree fits into that gigantic forest, full of complex paths, that is called the Internet.

The analogy to a forest is not far off. Every computer is a single plant, sometimes a little bush, sometimes a mighty tree. A percentage, to be sure, are weeds we could do without. In networking terminology, the individual plants are called 'nodes', and each one has an IP address and, usually, a domain name. Connecting those nodes are paths. The Internet, taken in total, is just the collection of all those plants and the pieces that allow for their interconnection - all the nodes and the paths between them.

Servers and clients (desktop computers, laptops, PDAs, cell phones and more) make up the most visible parts of the Internet. They store information and the programs that make the data accessible. But behind the scenes there are vitally important components - both hardware and software - that make the entire mesh possible and useful.

Though there's no single central authority, database, or computer that creates the World Wide Web, it's nonetheless true that not all computers are equal. There is a hierarchy, and it starts with a tree with many branches: the domain system. Designators like .com, .net, .org, and so forth are familiar to everyone now. These are the Top-Level Domains (TLDs), and the master lists for them are stored on a relatively small number of specialized systems maintained by a few non-profit organizations. Beneath them, company networks and others form the Second-Level Domains, such as Microsoft.com. A name like www.Microsoft.com is, technically, a sub-domain, though it's sometimes mis-named 'a host' or 'a domain'. A host is the name for one specific computer; that host's name may or may not be 'www', and usually isn't. The domain is the name without the 'www' in front.

Finally, at the bottom of the pyramid, are the individual hosts (usually servers) that provide actual information and the means to share it. Those hosts (along with other hardware and software that enable communication, such as routers) form a network. The set of all those networks taken together is the physical aspect of the Internet.

There are less obvious aspects, too, that are essential. When you click on a URL (Uniform Resource Locator, such as http://www.microsoft.com) on a web page, your browser sends a request through the Internet to connect and get data. That request, and the data returned for it, is divided up into packets (chunks of data wrapped in routing and control information). That's one reason you will often see a web page getting painted on the screen one section at a time. When packets take too long to get where they're supposed to go, that's a 'timeout'.

Suppose you request a set of names that are stored in a database. Those names, let's suppose, are stored in order. But the packets they get shoved into for delivery can arrive at your computer in any order. They're then reassembled and displayed.
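As a concrete (if toy) illustration of that last point, here is a minimal sketch in Python - the packet contents are made up, and real packets carry far more header information - showing how chunks that arrive out of order can be put back in order by a sequence number:

    # A toy model of packet reassembly: each 'packet' is a (sequence number,
    # payload) pair, and arrival order is not the original order.
    arriving_packets = [
        (2, b"harlie, "),
        (0, b"Alice, B"),
        (3, b"Delta"),
        (1, b"ob, C"),
    ]

    def reassemble(packets):
        # Sort by sequence number, then stitch the payloads back together.
        ordered = sorted(packets, key=lambda packet: packet[0])
        return b"".join(payload for _, payload in ordered)

    print(reassemble(arriving_packets).decode())  # Alice, Bob, Charlie, Delta

Real protocol stacks (TCP, in particular) do this reordering automatically, which is why the names appear on your screen in their original order even though their packets didn't arrive that way.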
Each of those packets can be directed to the proper place because it's associated with a specified IP address, a numeric identifier that designates a host (a computer that 'hosts' data). But those numbers are hard to remember and work with, so names are layered on top of them - the domain names we started out discussing. Imagine the postal system (the Internet). Each home (domain name) has an address (IP address). Those who live in them (programs) send and receive letters (packets). The letters contain news (database data, email messages, images) that's of interest to the residents. The Internet is very much the same.
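To see that name-to-number layering in action, here is a short sketch using Python's standard socket module; it asks the system's resolver (which ultimately consults the domain hierarchy described earlier) for the numeric address behind a familiar name:

    import socket

    # Translate a human-friendly domain name into the numeric IP address
    # that packets are actually addressed to.
    domain = "www.microsoft.com"
    ip_address = socket.gethostbyname(domain)
    print(f"{domain} resolves to {ip_address}")

The exact address printed will vary, since large sites answer from many servers, but the lookup step is the same one your browser performs before it sends a single packet.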

Copyright Infringement Statistics

Copyright infringement statistics, by most standards, are inflated. Most recent copyright infringement statistics cite that almost 30 percent of software in the United States of America is pirated. This means that they think 30 percent of the software on your computer is illegal… they think we’re all thieves, to an extent. However, copyright holders have good reason to worry that we’re violating their rules: the number of suspects referred to United States attorneys with an intellectual property lead charge increased twenty-six percent between 2002 and 2004 – and there have been studies showing that this is still rising. Copyright infringement statistics are difficult to come by, but it’s plain to see that infringement affects every kind of intellectual property.

Copyright infringement statistics show that in addition to software piracy, there are a lot of violations in the music world. Many unsuspecting people, from college students to thirty-something professionals, download music on a consistent basis, and often it’s not downloaded legally. Oftentimes, someone will download a song off a MySpace or YouTube page without giving thought to who really owns the copyright and whether it’s legal for them to have it. Copyright infringement statistics, brought to us by the music recording industry, would have us believe that online infringement is seriously hurting the recording industry. A sensible person, however, would realize that with the abundance of MP3 sales sites, this will turn quickly and recording giants will see the huge profits available online. It’s already begun; we have yet to see the full impact of online music sales and how it will increase revenue. I’m sure, with the huge talent pool at their disposal, the media giants will find a way to monetize the internet to their fullest advantage.

Copyright infringement statistics also show that many people are downloading games off the internet. With the litany of games available to us – from complete alternate worlds such as World of Warcraft to the more mainstream “The Sims” series – people are clamoring for PC games, and for good reason. They’re fun, intelligent games that play on a system everyone has – a computer. Because of this, people are always looking for new games to play and download, and they may download a game without knowing that it’s not ‘freeware’ (as many internet games are).

In addition to computer games, copyright infringement statistics also show that movies are downloaded in abundance on the internet. Many peer-to-peer file distribution programs and sites (such as BitTorrent or Kazaa) allow for the transfer of very large files, and they’re easy to find online. Using a tool provided by one of many suppliers, users can search for any item they like – and, of course, the system is abused and people download copyrighted movies and entire DVDs instead of publicly available works.

Copyright infringement also branches into written works, such as articles, books, poems, etc. Many times, a student will copy a paragraph or two without realizing the implications of such copying. While they may think of it as ‘borrowing’, if it’s used on a grander scale, the person could be opening themselves up to a large court fight, especially if it’s used commercially.

As you can see, copyright infringement statistics show us that many people are using copyrighted works illegally.
Do your due diligence when using another’s work – and ask for permission every time you want to use something you haven’t created. Chances are, if you just ask the question up front, you’ll save yourself from becoming another copyright infringement statistic and from a major lawsuit.

Web Hosting - Redundancy and Failover

Among the more useful innovations in computing, actually invented decades ago, are the twin ideas of redundancy and failover. These fancy words name very common-sense concepts: when one computer (or part) fails, switch to another. Doing that seamlessly and quickly, rather than slowly and with disruption, defines one difference between good hosting and bad.

Network redundancy is the most widely used example. The Internet is just that: an inter-connected set of networks. Between and within networks are paths that make possible page requests, file transfers and data movement from one spot (called a 'node') to the next. If there are two or more paths between a user's computer and the server, one becoming unavailable is not much of a problem. Closing one street is not so bad if you can drive down another just as easily.

Of course, there's the catch: 'just as easily'. When one path fails, the total load (how much data is requested, by how many users, within what time frame) doesn't change. Now the same number of 'cars' are using fewer 'roads', and that can lead to traffic jams. A very different, but related, phenomenon occurs when there are suddenly more 'cars', as happens in a massively widespread virus attack, for example. Then a large number of useless and destructive programs flood the network. Making the situation worse, at a certain point parts of the network may shut down to prevent further spread, producing more 'cars' on now-fewer 'roads'.

A related form of redundancy and failover can be carried out with servers, which are in essence the 'end-nodes' of a network path. Servers can fail because of a hard drive failure, motherboard overheating, memory malfunction, operating system bug, web server software overload or any of a hundred other causes. Whatever the cause, when two or more servers are configured so that one can take up the slack from another that's failed, that is redundancy. It is more difficult to achieve than network redundancy, but it is still very common - though not as common as it should be, since many times a failed server is simply re-booted, repaired or replaced with another piece of hardware. More sophisticated web hosting companies, however, will have such redundancy in place.

And that's one lesson for anyone weighing whether one web hosting company offers superior service over another (similarly priced) company: look at which company can offer competent assistance when things fail, as they always do sooner or later. One company may have a habit of simply re-booting. Others may have redundant disk arrays - hardware containing multiple disk drives to which the server has access, allowing one or more drives to fail without bringing the system down. The failed drive is replaced, and no one but the administrator is even aware there was a problem. Still other companies have more sophisticated systems in place: failover servers that take up the load of a crashed computer without the end user seeing anything. In fact, in better installations, they're the norm. When they're in place, the user has, at most, to refresh his or her browser and, bingo, everything is fine.

The more a web site owner knows about redundancy and failover, the better he or she can understand why things go wrong, and what options are available when they do. That knowledge can lead to better choices for a better web site experience.
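As a rough sketch of the failover idea - the host names here are hypothetical stand-ins for a set of mirrored servers - a client can simply try each redundant address in turn and move on when one doesn't answer:

    import socket

    # Hypothetical mirrors of the same service; any one of them can
    # handle a given request.
    servers = ["www1.example.com", "www2.example.com", "www3.example.com"]

    def connect_with_failover(hosts, port=80, timeout=3.0):
        # Return a connection to the first host that answers; on failure
        # (timeout, refusal, DNS error), fall through to the next one.
        for host in hosts:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue
        raise RuntimeError("all redundant servers are unreachable")

    connection = connect_with_failover(servers)
    print("connected to", connection.getpeername())
    connection.close()

Real hosting setups push this logic below the application, into load balancers, DNS records with multiple addresses, or RAID controllers, but the principle is the same: keep a spare, and switch to it automatically.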