December 27, 1999
By Karen Kenworthy
Hope everyone had a wonderful holiday weekend! My baby brother (he's 40, with a wife and five kids) wasn't able to come home this year. But I was able to celebrate Christmas morning with my mother, father, and my other brother and his family. Later that day I spent time with several dear friends. Along the way I heard from many of you, via e-mail, kindly thanking me and wishing me a Merry Christmas and Happy New Year. If your week was even half as enjoyable as mine, you were fortunate indeed!
Several of you also wrote to say you downloaded last week's Power Toy (https://www.karenware.com/powertools/pttoy) for your children or grandchildren (I had -no- idea how many grandparents are among us!). It sounds like Peedy and his gang were hits, at least among the toddler set. I have to admit that even I spent more time than usual, "testing" last week's program. Don't know what it is about the little guy, but Peedy sure does grow on you. :)
A Bit on Bits
On a more serious note, reader Brad Knaack wrote recently to say he'd found and fixed a bug in Karen's Snooper. As you may remember, this Power Tool runs invisibly, recording the start and stop times, and names, of every program run on your computer.
The original Snooper was born back in August 1997, as a 16-bit program written for Windows 3.x. Later, I converted it to a 32-bit program, to make it more comfortable running under newer, 32-bit operating systems like Windows 9x and Windows NT. Unfortunately, my conversion wasn't complete. A remnant of the Snooper's 16-bit origins remained.
What, you may be wondering, is the difference between a 16-bit and 32-bit program? We've all heard that 32-bit programs are somehow "better," but why? And how? Like many things in the world of computers, the answer is both simple and complex.
We humans use 10 different symbols (called digits) to represent numbers: 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. If we come across a number larger than 9, we use two or more symbols to represent it. Examples of two- and three-digit numbers are 15 and 326. Larger numbers require even more digits, such as 14,938,083.
OK. Right about now you're probably thinking "Thanks for treating me like a wonder boy. I know how to count." Well, get used to it. There's more simple stuff to come ...
You see, computers aren't as lucky as you and me. They only have two symbols to represent numbers: 0 and 1. When a number gets too large for just one of these (which happens pretty quickly), computers use more symbols (just like us). For example, to represent nine, we write the single digit "9." A computer must use four symbols, "1001."
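If you'd like to see this for yourself, here's a small sketch in Python (not the Visual Basic we'll meet later in this article) that turns one of our decimal numbers into the string of 0s and 1s a computer would use:

```python
def to_bits(n):
    """Return the binary representation of a non-negative integer as a string."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # the remainder after dividing by 2 is the lowest bit
        n //= 2                   # then move on to the next bit
    return bits

print(to_bits(9))   # our single digit "9" becomes the four symbols "1001"
print(to_bits(15))  # two of our digits, but four of the computer's: "1111"
```

Repeatedly dividing by 2 and keeping the remainders is just the binary cousin of the way we peel the ones, tens, and hundreds digits off a decimal number.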
To keep things straight, the symbols computers use to represent numbers aren't called digits. Instead, they are called "bits," short for "binary digits" (binary means "has two values").
A Word on Words
To us, a number is large if it has a lot of digits. To computers, a number is large if it has a lot of bits. How large is large? That depends on how smart your computer is. The first micro-computers (computers consisting of a single electronic circuit on a chip) could add two 4-bit numbers at a time. Pretty impressive, until you realize that the largest 4-bit number ("1111") is the same as our two-digit number 15. Micro-computers of this era powered the first hand-held calculators.
Pretty soon, the intelligence of micro-computers doubled. Now they were adding 8-bit numbers in a flash. An 8-bit number can be as large as our number 255. That may not seem like much, but these computer chips started the personal computer revolution. The earliest "PCs," such as the Apple II and the Tandy Model I, relied on such micro-processors.
The number of bits a processor can add at one time is called the processor's "word size." So the earliest processors managed 4-bit words, while later models sported jumbo 8-bit words. The chip inside the original IBM PC (the Intel 8088) was a hybrid. Some of its instructions only affected eight bits at a time, while others manipulated 16 bits at once. Marketing folks, seeing the glass as half full, called it a 16-bit processor, while more pessimistic programmer types often considered it an 8-bit machine.
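There's a simple rule hiding behind those limits of 15 and 255: an n-bit word can hold values up to 2 to the nth power, minus one. A few lines of Python (again, just an illustration) show the ceiling for each word size we've met so far:

```python
# The largest unsigned number an n-bit word can hold is 2**n - 1.
for bits in (4, 8, 16, 32):
    largest = 2**bits - 1
    print(f"{bits:2}-bit word: largest value {largest:,}")
```

Running this prints 15 for 4 bits and 255 for 8 bits, matching the figures above, and previews the much roomier 16-bit and 32-bit words we're about to discuss.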
A Handle on Handles
Before long, the confusion was eliminated by the introduction of true 16-bit processors. One of them, the Intel 80286, powered the IBM AT personal computer, and later ran the first version of Windows. Since Windows was designed to run on a "16-bit machine," whenever possible it used 16-bit numbers. For example, the location of a program within memory was specified by a pair of 16-bit numbers (called the "base" and "offset"). Fonts, icons, cursors, windows, and other frequently used structures were assigned unique 16-bit identifiers, called handles.

As long as Windows programs were small, and few in number, everything worked pretty well. But by the time Windows 3.0 and 3.1 were released, things were beginning to fall apart. Increasingly, the pools of available handles (called heaps) ran dry. When this happened, Windows complained it had "Insufficient System Resources" and died. To make matters worse, using a pair of 16-bit numbers to address memory meant Windows could see no more than 16MB of RAM. While that much RAM once cost $16,000, and would run any program in existence, those days were long gone.
Fortunately, about this time 32-bit processors such as the Intel 80386 arrived. These processors use 32-bit numbers to address memory, allowing them to use up to four billion bytes of RAM (still a large amount today, but not for long). They also move and manipulate data 32 bits at a time. But for a while, no programs asked them to perform these amazing feats. Windows, and Windows applications, were still designed for the more limited 16-bit processors of an earlier age.
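A quick back-of-the-envelope check of those two memory limits, sketched in Python, shows just how big the jump was:

```python
MB = 2**20  # one megabyte
GB = 2**30  # one gigabyte

print(2**24 // MB, "MB")  # the 16MB ceiling of the 16-bit Windows era
print(2**32 // GB, "GB")  # the reach of a single 32-bit address
```

That's the "four billion bytes" mentioned above: 2 to the 32nd power is 4,294,967,296, or 4GB.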
As you probably know, this changed with the release of Windows 95. It took advantage of these new 32-bit processors. It used single 32-bit numbers to specify locations within the computer's memory, making possible the large amounts of RAM we use today. It also started the custom of assigning 32-bit identifiers, or handles, to structures such as fonts, windows, icons, etc., ensuring that there would be enough unique handles for years to come.
Well, sort of. Actually, although Windows 95 (and Windows 98) set aside 32 bits to store a handle, they only use half that amount. This was done to remain compatible with older 16-bit Windows programs (once the only kind in existence). Those programs only saw the first 16 bits of each handle, the 16 bits that Windows 9x actually used.
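In code, "seeing only the first 16 bits" amounts to masking off everything else. Here's a Python sketch using a made-up handle value (the number itself is purely hypothetical):

```python
handle = 0x0001C04A       # a hypothetical 32-bit handle, invented for illustration

low16 = handle & 0xFFFF   # keep only the low 16 bits, as a 16-bit-era program would
print(hex(low16))         # the high 16 bits (0x0001) are invisible to that program
```

As long as Windows itself never put anything interesting in those high 16 bits, old programs and new ones could happily share the same handles.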
In fact, this is what the original version of the Winmag.com Snooper did. In its original incarnation, it set aside just 16 bits to store the handle of each program running on the computer (yes, running programs are assigned handles too). Later, when creating the 32-bit version of Snooper, I tried to have it use 32 bits to store each handle. But I overlooked one place. There, Snooper still used just 16 bits for handle storage.
This went unnoticed until the arrival of Windows NT. Often called the first true 32-bit version of Windows, it uses the full 32 bits of each handle, confounding programs that expect only 16. One such program was the Snooper. When it tried to store a Windows NT handle in 16 bits of RAM, some bits wouldn't fit. As a result, Snooper died after exclaiming "Overflow Error."
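You can reproduce the same kind of failure in Python, using the struct module to simulate 16-bit and 32-bit storage slots (the handle value here is made up for the demonstration):

```python
import struct

handle = 0x0001C04A  # a hypothetical Windows NT handle needing more than 16 bits

print(struct.pack("<I", handle))   # a 32-bit slot: the value fits comfortably
try:
    struct.pack("<H", handle)      # a 16-bit slot: the high bits won't fit
except struct.error as err:
    print("Overflow Error:", err)
```

The roles here mirror Visual Basic's data types: "<I" plays the part of a Long (32 bits), while "<H" plays the part of an Integer (16 bits).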
The solution? Change the size of the memory location where Snooper stored handles. In the parlance of the Visual Basic programming language, a variable was declared as "Long," rather than "Integer." In the whole program, only one word was changed to fix this bug. But it made all the difference in the world. :)
'Till Next Year
The good earth will be another year older the next time we get together. Until then, here's hoping all your computer problems are little ones. Have a great time celebrating the New Year. But don't party too hard. I'll be looking for you bright and early next Monday. And if you see me on the 'net, or at Times Square, be sure to wave and say "Happy New Year!"