08.13.02

GUI, Considered … Something

Posted in Lectures at 9:58 pm by admin

I would like to expound upon a theory I have held for some time: conceptually, GUIs can lead to poor management practices.

First, let me discuss the alternatives to graphical user interfaces (GUIs). Since GUIs were designed to replace command line interfaces (CLIs), and there was really nothing before the CLI and nothing new since the GUI, the only alternative is the CLI. Well, OK, there is the menu-driven interface that was the shadow between CLI and GUI, but the concept of a menu-driven interface is more similar to the GUI than the CLI, so I’ll just lump it in.

A CLI is a model for interacting with a computer system based upon command/response pairs. An example command might be “dir bob”, which could mean, “Computer, I would like a listing of the directory ‘bob’.” The computer would probably respond with a list of the things found in ‘bob’, assuming this is a command the computer in question understood. Keep in mind that the CLI was invented in the infancy of computing science, and there are quite a few different ones to choose from.
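
To make the model concrete, here is a minimal sketch of a command/response loop in C. The commands it understands are invented for the example; the point is the shape of the interaction, one command in, one response out.

    /* Minimal sketch of the CLI command/response model.
       The command names and the response are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[256];

        for (;;) {
            printf("> ");                       /* prompt for a command */
            fflush(stdout);
            if (fgets(line, sizeof line, stdin) == NULL)
                break;                          /* end of input */
            line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */

            if (strcmp(line, "dir bob") == 0) {
                /* respond once; the response is never updated afterward */
                puts("report.txt  notes.txt  budget.xls");
            } else if (strcmp(line, "quit") == 0) {
                break;
            } else {
                puts("unknown command");
            }
        }
        return 0;
    }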

In the early days of computing, we wasted a lot of paper. Some would argue that we still do, but in this case I am referring to the printing console. It may seem archaic in the modern era of the monitor console, but we used to go through boxes of graybar paper every month because paper held every command/response pair. This was horribly wasteful because, in general, a computer user only needed to see the response for a few minutes or even seconds. Still, the mechanism created a permanent record.

For a single-user computer, like the early IBM PC, the computer spent most of its time waiting for the user to tell it what to do next. So, if a user ran the “dir bob” command and never touched the bob directory again, the response would be valid forever. With multi-user systems, this is not true, and users have become familiar with the consequences. A good example is a command that outputs the current time. Since the computer doesn’t distinguish between a printing terminal and a monitor terminal for normal commands, the time does not update on the screen, just as it would not update on the paper. Users are familiar with the idea that the information output by the computer is out of date the instant after it is displayed.

By the early 1980s, higher-end systems had terminals with screens, but the command line metaphor of the screen as a piece of paper still held true for basic interaction with the computer. Certainly there were word processors and other applications that were more interactive, but once you exited the word processor, you were back to the command/response behavior. The CRT terminal was so successful that when IBM came out with their PC, even though it could work with a terminal, IBM built in the keyboard and the ability to interact directly with the monitor rather than requiring a separate terminal device.

Enter the GUI.

The original GUI was developed by Xerox at the Palo Alto Research Center, Xerox PARC. The original design called for a WYSIWYG (what you see is what you get) word processor. After a while, the development team decided to use this “Desktop Metaphor” to interact with the computer at every level. So, instead of creating directories with files, you would create file cabinets with files. Each of these objects had a little picture that represented it on a graphical screen. Just as you can close a file cabinet or drawer and not see all of the folders in it, you can close the little picture of the file cabinet, hiding its files. This proved to be a wildly successful method of making computer concepts accessible to non-computer-savvy individuals.

Because the lab at Xerox PARC was designing a system from the ground up with GUI interaction in mind, their GUI had hooks into the behavior of objects. This meant that, if the state of an object changed, its little picture on the screen could track that change. For instance, if one user deleted a file from a particular cabinet and another user had that cabinet displayed on their desktop, the picture representing that file in that cabinet disappeared.
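
Here is a rough sketch, in C, of what those hooks amount to; all of the names are invented for illustration. The object keeps a list of views to notify, so every on-screen picture tracks state changes:

    /* Hypothetical sketch: an object notifies every registered view
       when its state changes, so each on-screen picture stays current. */
    #include <stdio.h>

    #define MAX_VIEWS 8

    struct file_object {
        const char *name;
        int deleted;
        void (*views[MAX_VIEWS])(struct file_object *);  /* callbacks */
        int nviews;
    };

    static void watch(struct file_object *f, void (*view)(struct file_object *)) {
        if (f->nviews < MAX_VIEWS)
            f->views[f->nviews++] = view;
    }

    static void delete_file(struct file_object *f) {
        f->deleted = 1;
        for (int i = 0; i < f->nviews; i++)
            f->views[i](f);               /* every desktop hears about it */
    }

    static void desktop_view(struct file_object *f) {
        if (f->deleted)
            printf("removing icon for %s\n", f->name);
    }

    int main(void) {
        struct file_object memo = { "memo.txt", 0, {0}, 0 };
        watch(&memo, desktop_view);       /* user 1's desktop */
        watch(&memo, desktop_view);       /* user 2's desktop */
        delete_file(&memo);               /* both icons disappear */
        return 0;
    }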

Unfortunately, this behavior is not pervasive, while the expectation of this behavior is. Many software developers, recognizing the success of the GUI, developed GUIs for their systems that would run the command line equivalents and display the results graphically. This sounds like a good idea. It frequently is not, because the GUI and the real state of things can get out of sync.

An example: Suppose that you manufacture a large disk array and that the management software was written for a command line. Now you want to create a GUI “wrapper” so your customers will like you. So you do, but you don’t change the command line; you just run the commands and display the results graphically. Now two users have the GUI open, and one allocates a bunch of disks in the array, but the second one doesn’t know about it. When the second user comes back, he still sees the allocation of disks as it stood when he started his copy of the management GUI. So, he selects several disks and allocates them. Unfortunately, he selects some of the same disks user 1 already allocated. If the command line isn’t designed well enough to recognize that this is a mistake, it may very well allow the operation and destroy user 1’s data. Oops.
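
A sketch of that failure in C, with the array and its commands invented for the example. Each GUI acts on the disk map it cached at startup, and the command layer never checks the real state:

    /* Hypothetical sketch of the stale-state race: both GUIs cache the
       disk map they saw at startup, then act on that stale picture. */
    #include <stdio.h>

    #define NDISKS 4

    static int array_state[NDISKS];          /* 0 = free, 1 = allocated */

    /* what the CLI wrapper does: trust whatever map the GUI cached,
       never re-checking array_state before acting */
    static void allocate(const char *user, const int cached[NDISKS]) {
        for (int i = 0; i < NDISKS; i++) {
            if (cached[i] == 0) {            /* "free" per the stale cache */
                array_state[i] = 1;          /* may clobber someone's data */
                printf("%s allocated disk %d\n", user, i);
            }
        }
    }

    int main(void) {
        int cache1[NDISKS] = {0};            /* user 1 opens the GUI */
        int cache2[NDISKS] = {0};            /* user 2 opens the GUI */

        allocate("user 1", cache1);          /* disks now hold user 1's data */
        allocate("user 2", cache2);          /* cache2 is stale: same disks, oops */
        return 0;
    }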

OK, so you fix this by either having the software poll the array periodically or updating the GUI before allowing any actions. Polling is slow and frequently costly in system resources. In any case, it doesn’t solve the problem; you can still be out of sync, just less frequently. So, you go with option 2 and force an update of the GUI state before any action takes place. You still have the problem that user 1 can update and start an action, and before it finishes, user 2 updates and starts an action, and they collide.

The only solution that makes sense is one where the GUI interacts with the managed system and constantly updates the screen with correct information. If the user of the GUI selects an object, then the fact of that selection must be reflected instantly in all other copies of the GUI. Any other behavior leads to chaos.
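
A sketch of that push model in C, with all names invented for illustration. Every change on the managed system, including a mere selection, fans out to every attached copy of the GUI:

    /* Sketch of the push model argued for above: the managed system
       broadcasts every state change (including selections) to all GUIs.
       Names and structure are hypothetical. */
    #include <stdio.h>

    #define MAX_GUIS 8

    struct gui { const char *owner; };

    static struct gui *guis[MAX_GUIS];
    static int nguis;

    static void attach(struct gui *g) { guis[nguis++] = g; }

    /* every change on the array fans out to every attached GUI */
    static void broadcast(const char *event) {
        for (int i = 0; i < nguis; i++)
            printf("update %s's screen: %s\n", guis[i]->owner, event);
    }

    static void select_disk(struct gui *g, int disk) {
        char msg[64];
        snprintf(msg, sizeof msg, "%s selected disk %d", g->owner, disk);
        broadcast(msg);              /* selection is instantly visible to all */
    }

    int main(void) {
        struct gui u1 = { "user 1" }, u2 = { "user 2" };
        attach(&u1);
        attach(&u2);
        select_disk(&u1, 2);         /* user 2 sees the selection immediately */
        return 0;
    }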

05.15.02

TCP/IP Is Not Like An Onion

Posted in Lectures at 9:57 pm by admin

Take a class on networking at any major university, or take any networking equipment vendor’s training, and they will trot out the Open Systems Interconnection (OSI) model of a network. This is a lame model. Real networking doesn’t even remotely resemble it. However, despite the overall uselessness of the OSI model, some people have gone to the unreasonable extreme of redefining TCP/IP in terms of that model. This is even more ridiculous.

TCP/IP was not designed; it has evolved as our understanding of computer communications has evolved.

The OSI model was developed by a seven-committee working group of the ISO. (Don’t worry. There won’t be a quiz on this material.) The seven committees provide one possible explanation for why the OSI model has seven layers.

The seven layers of the OSI model are:

  • Application
  • Presentation
  • Session
  • Transport
  • Network
  • Data Link
  • Physical

The theory of operation is that, as an application, you need only understand the Application layer in order to use the network. When designing the Session layer, you need not be bothered with implementation details of the Network, Data Link, and Physical layers. Each layer hides the implementation details of the layers above and below. In the real world, for reasons I wish I could adequately explain, this makes for a very slow network.

My inadequate explanation is: speed and accuracy are the most important parts of a network. You will likely write a program that does a networking task once for millions of actual uses of that program. In order for the 7-layer model to work, each layer must hide the implementation details of all layers below and all layers above. This means that, at the Session layer, you cannot depend on a particular behavior in the Physical layer. So, suppose that your Session layer could be simplified for the case of a reliable network. You cannot take advantage of that and skip the intervening layers to just deliver the data. Your data must slog its way down through four more layers to get to the wire, and then back up through four layers on the other side to reach the peer Session layer. This takes time. More importantly, it adds latency, which is sometimes hard to overcome.
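
A toy illustration in C, with the layer names standing in for real implementations. Even when a layer has nothing useful to contribute, the data still crosses it on the way down:

    /* Toy illustration of strict layering: the data crosses every layer
       even when a layer has nothing useful to do. All names invented. */
    #include <stdio.h>

    static void physical_send(const char *buf)  { printf("wire: %s\n", buf); }
    static void datalink_send(const char *buf)  { physical_send(buf); }
    static void network_send(const char *buf)   { datalink_send(buf); }
    static void transport_send(const char *buf) { network_send(buf); }

    static void session_send(const char *data) {
        /* even if the network below is known to be reliable, the data
           still slogs through four more layers of calls and copies */
        char framed[128];
        snprintf(framed, sizeof framed, "[session]%s", data);
        transport_send(framed);
    }

    int main(void) {
        session_send("hello");
        return 0;
    }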

Another example: Let’s suppose that you have an application that uses broadcast for locally connected systems but unicast for distant systems. This allows the local systems to each receive the data using minimal bandwidth, and the remote systems to receive the data in a more reliable stream. If the implementation below the Presentation layer is hidden from you, you cannot see which systems are local and which are not. So, you cannot write that application.
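
For the curious, here is roughly what such an application has to do with BSD sockets; the addresses and port are examples only. It only works because the API exposes exactly the details the OSI model says should be hidden:

    /* Sketch: broadcast for local peers, a unicast TCP stream for a
       distant one. Addresses and port are placeholders for the example. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "data";

        /* local systems: one UDP broadcast reaches them all */
        int bsock = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        setsockopt(bsock, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);
        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        local.sin_port = htons(5000);
        local.sin_addr.s_addr = inet_addr("192.168.1.255"); /* example subnet */
        sendto(bsock, msg, sizeof msg, 0,
               (struct sockaddr *)&local, sizeof local);

        /* distant system: a reliable unicast TCP stream */
        int tsock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in remote = {0};
        remote.sin_family = AF_INET;
        remote.sin_port = htons(5000);
        remote.sin_addr.s_addr = inet_addr("203.0.113.7");  /* example host */
        if (connect(tsock, (struct sockaddr *)&remote, sizeof remote) == 0)
            write(tsock, msg, sizeof msg);

        close(bsock);
        close(tsock);
        return 0;
    }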

The problem is that an application that doesn’t understand how networks work is poorly written.

An aside: There are a few lines of code that appear almost identical in almost every TCP/IP client program, and another few that appear in almost every server program. These lines were published in some textbook or journal, and now everyone uses them. They are used almost as incantations, without any understanding on the programmer’s part of what they do. They are popular because, for 90% of networking, this is all you need to make the connection from client to server. Unfortunately, in many cases of the other 10%, these code fragments are misapplied or simply inappropriate, resulting in bad operation, poor performance, or unusual behavior.
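
For reference, the client-side incantation probably looks something like this in C (the host name and port here are placeholders). Notice that these few lines already touch name resolution, the TCP socket type, and a raw IP address structure all at once, a point that matters below:

    /* The classic client-side "incantation," more or less as it appears
       in the textbooks: resolve a name, fill in a sockaddr, connect. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>

    int main(void) {
        struct hostent *host = gethostbyname("example.com");
        if (host == NULL) return 1;

        int sock = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);

        if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }
        /* ...from here the connection is just read() and write()... */
        return 0;
    }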

With the academic success of the OSI model, the TCP/IP networking community redefined their model of a network in terms of the OSI model. This model has the following five layers:

  • Application
  • Transport
  • Network
  • Data Link
  • Physical

It is important at this point to discuss why you make a model of a system. In architecture (buildings), you build a model to show the customer what the finished job will look like. In software systems, a model is more like the architect’s blueprints: it allows a single team to coordinate the behavior of a great deal of code that they may never see. So, why design a model for an existing system? Simple: it becomes a teaching and behavior-predicting tool. A good example of this is the Ideal Gas Model of physics. It is a pretty good predictor of the behavior of gases. It is wrong at the extremes of temperature and pressure, but it provides a teaching tool and a set of equations that are “close enough” for everyday use. It is obvious that we didn’t come up with the Ideal Gas Model and then design the behavior of gases around it. It just happens to “fit” the behavior pretty well. There are many examples of this sort of model in physics.

So, the next question becomes: Is the 5-layer TCP/IP model a “good” model of TCP/IP networks?

Duhhh, no. It is not.

TCP/IP is really a 2-layer system. The TCP/IP part of the system is one layer, and the Physical/Data Link is the other. While you can certainly design a thing to replace the TCP part of TCP/IP, it would no longer be TCP/IP then, would it? Likewise, TCP was actually designed before IP. IP was designed to solve the problem of multiply interconnected networks. Before IP was designed, however, there was nothing else performing its functions as a separate software layer in TCP/IP, because there was no such thing as IP. Get it?

The reason that there are only two layers is simple. You cannot write a program that interacts with only the TCP layer, because you have to use code that finds out the IP address of the system you wish to contact. This means that the TCP layer does not hide diddly-squat from you. Those magical lines of code mentioned in the aside above interact at every layer except the bottom two. These two layers are typically some form of Ethernet at the host level, which defines them both and hides neither.

TCP/IP, while it is not “layered,” is “encapsulated.” It works like this: Your application creates a hunk o’ data, HOD (which may be a single byte, or octet), and says to the network library, “Send this stuff.” If you are using TCP/IP, this HOD is wrapped up in a thing called a TCP packet and passed to the IP part of the library. There, the new packet has an IP header attached to the front, and the whole schmear gets handed over to the network card driver. That driver interacts with the network card itself to load the packet of data into a transmitter circuit in the appropriate format for transmission. So your data is encapsulated by a TCP packet. That is encapsulated by an IP packet, and THAT is delivered by some means determined by your network hardware to an appropriate receiver (which may be the end host or a router).
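
A sketch of that nesting in C; these structs are abbreviated stand-ins, not the real header layouts:

    /* Sketch of the encapsulation, not real header layouts: each stage
       wraps the previous one, the way the packet is actually built. */
    #include <stdio.h>
    #include <string.h>

    struct tcp_wrap {                /* TCP header (fields abbreviated) */
        unsigned short src_port, dst_port;
        char payload[64];            /* the application's hunk o' data */
    };

    struct ip_wrap {                 /* IP header prepended to the front */
        unsigned char  ttl, protocol;
        unsigned int   src_addr, dst_addr;
        struct tcp_wrap tcp;         /* the TCP packet rides inside */
    };

    struct frame {                   /* what the card driver transmits */
        unsigned char dst_mac[6], src_mac[6];
        struct ip_wrap ip;           /* ...and the IP packet inside that */
    };

    int main(void) {
        struct frame f;
        memset(&f, 0, sizeof f);
        strcpy(f.ip.tcp.payload, "hunk o' data");   /* the HOD */
        printf("%zu bytes handed to the card\n", sizeof f);
        return 0;
    }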

The key to all of this is that, while it looks like TCP sits on top of IP, and IP sits on top of the network card driver, and the network card driver sits on top of the network card, and the network card sits on top of the interconnection, nobody designs things this way. Even if they did, you couldn’t use TCP without knowing that IP (or something like IP) was under it. The software interface is simply not designed in layers.

Some of you may be wondering why I wrote all this. The truth is, I have to take a test in a few days and for that test I have to know the OSI and TCP/IP network “models.” Learning academic overkill crap just to get a little symbol on my business cards (which I don’t have) irked me a bit.

Thus endeth the diatribe.