Thursday, August 26, 2010

Luddite Technology – Crystal Radios

Listen to the radio without using any electricity, including electricity stored in a battery!

That sounds too good to be true, doesn’t it? Well, I’m here to tell you that it’s not only possible, but relatively easy to do, and it can even be done with no special parts (although some special parts can definitely improve the performance). There is a catch, of course: without a source of electricity, the signal you hear won’t be very loud, and you’ll have to listen via earphones (and a particular type at that – more on that later).

The simplest of radios can be built with simple wire, a crystal or equivalent, and an earphone. I’ve built this kind of radio, and while it works – which seems miraculous enough – its performance is less than spectacular. It only picks up the very strongest of AM stations, and it picks them all up at once. Fortunately, we can do much better by adding more wire in the correct configuration, a tunable capacitor or two, and a handful of other innovations.

Simplest Possible Radio
(Courtesy of Wikipedia)

More specific and detailed construction information can be found on the Internet, and in the interest of space, I won’t go into the details here – at least not yet. My daughter and I are about to start building our own crystal sets, and I’ll post the results and pictures (I have built crystal radios in the past, but none of them have survived my frequent moves, so I can attest to the fact that they do work). What makes this topic interesting, and important to Luddites, is that almost everything needed to establish contact with the outside world can be made at home, with little specialized knowledge and without using any external power – the exception being the earphone (which could also be made at home, but would require a little more knowledge and skill).
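For the technically curious, the “tunable” part comes down to a single formula: a coil of inductance L and a capacitor of capacitance C resonate at f = 1/(2π√(LC)), and the radio favors whatever station sits at that frequency. Here’s a quick back-of-the-envelope sketch in Python – the coil and capacitor values are illustrative guesses of the sort crystal-set builders use, not a tested design:

```python
import math

def resonant_freq_hz(L_henries, C_farads):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L_henries * C_farads))

# Illustrative values: a ~240 microhenry coil with a variable
# capacitor that swings from about 40 pF to about 365 pF.
L = 240e-6  # henries
for C in (40e-12, 365e-12):
    print(f"C = {C * 1e12:.0f} pF -> {resonant_freq_hz(L, C) / 1e3:.0f} kHz")
```

Swinging the capacitor from one end of its range to the other walks this hypothetical set across roughly 540–1620 kHz, which is essentially the AM broadcast dial.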

Now don’t get me wrong. A crystal radio is a poor substitute for even the cheapest modern AM/FM radio in terms of ease of use, portability, and sound fidelity. If you have one, I wouldn’t throw it away simply because you can now make your own. I would, however, still build one, simply because you can. Why? Because doing so is a step toward independence. It’s one less chain binding you to the culture of external dependence.

It’s also a good example to show that technology itself isn’t the enemy. In its proper place, technology can be liberating and empowering. As I’ve said many times, technology is supposed to work for humans; humans aren’t supposed to work for technology – and that includes the self-imposed and externally imposed slavery of having to work to acquire it. The crystal radio is enabling because it provides a path for those who actually need a radio to acquire one without undue expense of resources.

A historical example of the utility of the crystal radio can be found in World War II. GIs in the European Theater listened to radios for both news and morale. Clever German scientists and engineers discovered a way to detect the presence of US troops by the signal generated by the local oscillator in their portable radios. The radios were consequently banned, leaving the GIs without a vital link to the outside world. Resourceful soldiers restored that link by building their own receivers from materials they had on hand. The radios they built were crystal sets, which have no local oscillator and thus did not give away their position.

Part of what makes these foxhole radios so amazing from a technical perspective is that they didn’t actually have a crystal. The detector (the role the crystal plays) was created by holding a razor blade to a flame and using the scorch mark as a primitive semiconductor – in effect, a diode. That is sheer, liberating genius, and even if it didn’t play a major role, anything that kept our soldiers’ morale up certainly helped the Allies win the war.
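What that scorched blade was doing, electrically, is rectifying the incoming signal so that the slow audio envelope can be recovered from the fast carrier wave. Here’s a toy simulation of that idea in plain Python – the sample rate, frequencies, and filter constant are arbitrary round numbers chosen so the demo stays small, not anything resembling a real broadcast:

```python
import math

fs = 200_000   # simulation sample rate, Hz
fc = 10_000    # toy "carrier" frequency (real AM carriers are higher)
fa = 400       # audio tone being broadcast, Hz
n = fs // 50   # simulate 20 ms of signal

# An AM signal: a carrier whose amplitude follows the audio.
audio = [0.5 * math.sin(2 * math.pi * fa * t / fs) for t in range(n)]
am = [(1 + a) * math.sin(2 * math.pi * fc * t / fs)
      for t, a in enumerate(audio)]

# The detector (scorched blade, galena crystal, or a modern diode):
# it conducts in one direction only.
rectified = [max(0.0, x) for x in am]

# A crude one-pole low-pass filter smooths away the carrier and
# leaves the audio envelope, much as the earphone itself does.
alpha = 0.05
recovered, y = [], 0.0
for x in rectified:
    y += alpha * (x - y)
    recovered.append(y)
```

Plot `recovered` and you’ll see the 400 Hz tone riding on a DC offset – which is exactly the sort of signal a high-impedance earphone turns back into sound.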

There is a fly in the ointment to all of this, however: crystal radios can only receive analog signals, and the current push toward digital broadcasting will make them obsolete. Strictly speaking, this doesn’t mean that you can’t still build your own radios, but it does mean that you must have an external power source, because the electronics needed to decode a digital signal consume power.

I work in the field of technology and communications, and although it hasn’t exactly been a career-enhancing position, I’ve long been opposed to the digitization of media and communications. My opposition is based on the principle of accessibility. An analog signal is accessible in a much wider array of circumstances than a digital signal, albeit at the expense of quality. The human ear (and eye) can listen or see around a great deal of noise such as static, artifacts, or other interference. A digital signal, by the nature of decoding the ones and zeros that make it up, is either perfect or unavailable, and in marginal conditions digital signals fail long before analog ones. They are also far more dependent on stable power. (There are a few exceptions to this statement, predominantly found in the world of ham radio, which I’ll discuss in future posts, but these aren’t actually ‘digital’ modes of communication – a CW (Morse code) signal, for example, is either ‘on’ or ‘off’, but the actual information isn’t directly contained in the on or off state, and the human ear can decode weak CW signals through a process of inference.)
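This “all or nothing” behavior can be made concrete with a toy model – and only a toy: the integrity check below cheats by comparing against the original frame, standing in for a real CRC that an actual receiver would use:

```python
import random

random.seed(1)  # reproducible toy run

def analog_reception(signal, noise_amp):
    """Analog: noise piles on top, but some version of the signal
    always gets through, degrading gracefully."""
    return [s + random.uniform(-noise_amp, noise_amp) for s in signal]

def digital_reception(bits, flip_prob):
    """Digital: each bit may flip in transit; an integrity check then
    rejects the whole frame if anything changed. Comparing to the
    original is a stand-in for a real checksum/CRC."""
    received = [b ^ (1 if random.random() < flip_prob else 0) for b in bits]
    return received if received == bits else None

frame = [1, 0, 1, 1, 0, 0, 1, 0] * 8   # one 64-bit frame
for p in (0.001, 0.01, 0.1):
    ok = sum(digital_reception(frame, p) is not None for _ in range(100))
    print(f"bit-flip probability {p}: {ok}/100 frames survive")
```

As the bit-flip probability creeps up, frame survival falls off a cliff, while the analog path just gets noisier – which is the accessibility argument in miniature.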

The move to digital communications and media is driven by the pursuit of money. Manufacturers want to sell you the equipment needed to decode the signal, and thus the encoding schemes are often proprietary, for the express purpose of limiting access to paying customers. Media content providers encrypt their products to limit the ways in which you use your purchases. In principle this is fair enough, and everyone deserves to be rewarded fairly for the fruits of their labors. In practice, though, they are accomplishing these goals through the use of the common airwaves, which they hold only through public trust – part of that trust being that they provide critical information and make it freely available. The transition of the public airwaves to proprietary formats is, in my opinion, a violation of that trust.

Tuesday, August 17, 2010

Food Culture

“Eat food or die,” a guy I know once said. This seems beyond obvious, but history has shown us many instances where people, both collectively and individually, have chosen to die rather than to eat. It would be easy to dismiss these instances as proof that the gene-pool is self-chlorinating, but I think that to do so is to ignore a valuable lesson.

A good example of this phenomenon – of people choosing to die rather than to eat – can be found in the Viking culture of Greenland. The Vikings settled in Greenland during an uncharacteristically warm climatic period, which enabled them to bring with them the agricultural and culinary practices of their homeland. Eventually, the climate swung back to normal and those practices could no longer be maintained. The Viking culture of Greenland waned to virtual non-existence, with a dramatic increase in deaths due to malnutrition and an emigration of Vikings back to their homelands.

One of the curious aspects of the Greenland Viking culture, to anthropologists and historians, is that they didn’t eat fish. I don’t like fish either, but then I don’t live in Greenland. The Inuit – the closest thing Greenland has to a native people – survived the very period that did the Vikings in by subsisting on fish and other marine life, as they have for millennia. Food, perfectly normal and acceptable food, was literally swimming all around the Vikings, but rather than change their diets they chose instead to starve or leave. No one is really sure why the Greenland Vikings refused to eat fish. It seems an odd thing for a sea-faring people, especially when their Nordic parent culture embraces seafood. Whatever the reason, for several generations they simply refused to acknowledge that fish were food, and this led to their demise.

The point of this brief history lesson is that what we consider to be food isn’t solely dependent on what’s available, on edibility, nutrition, or anything scientific. As members of a culture, we subscribe to a particular set of rules that informs us what is ‘food’ and what isn’t. This is something I touched on in my last post dealing with weeds – perfectly edible and nutritious foods are growing wild in our yards and green spaces, but most people not only refuse to eat them, they are completely oblivious to the fact that they are actually food. Furthermore, those brave souls who are willing to buck tradition and harvest this bounty are thought of as strange.

This is the difference between food and food culture. ‘Food’ is merely an edible substance. ‘Food culture’ is a set of rules, generally tied to a place and a group of people, which specifies which foods are acceptable, how food is gathered or produced, the proper methods of preparation, when to eat, how to eat, and how much to eat. Traditional food culture is tied to a place and to a people and is thus a local or regional phenomenon as well as an ethnic one. It is, in fact, part of how ethnicity and regionality are determined. Even in our modern world there is a food culture, in spite of the fact that in its current state it is focused on eliminating regional and ethnic variations to produce a kind of culinary hegemony.

The stated principle of modern food culture is that anyone should be able to eat whatever they want, whenever they want it, so long as they can afford it. This is accomplished through market economics – producing foodstuffs where it is cheap to do so and transporting them to places where they fetch the best prices, thereby enriching the producers as the window of season shifts around the world and, theoretically at least, ensuring that everyone has a rich and varied diet. This is a laudable goal, but it ultimately rests on flawed assumptions about the nature of food, productivity, economics, culture, and human nature.

Obviously, as the case of the Greenland Vikings illustrates, traditional food cultures can fail, although one could argue that theirs was a failure to realize that food culture is tied to a place as well as a people. The sum total of a food culture, however, is meant to educate the people of a particular place on how best, most effectively, and most safely to utilize the food resources of their area in a sustainable manner. In a similar way, the modern food culture has a potential flaw: it refuses to see that food culture is tied to either a people or a place. This creates a systemic vulnerability and a dependence that represents a danger to everyone.

Danger? Yes, Danger! If you’re like most people in this country, your food comes from somewhere other than where you live. Chances are that you actually eat very little that comes from where you live. Implicitly this means that your life depends on someone delivering one of the basic requirements for life. What happens if they stop doing it? What are YOU going to eat, and where are YOU going to get it from, and how are YOU going to get it?

These questions are neither rhetorical nor trivial. They aren’t necessarily even meant to prompt you into trying to grow and produce all of your own food. The point of asking them is to point out that food has to come from somewhere, and if it isn’t coming from somewhere else, it has to come from where you are. There’s a pretty good chance that food is growing and being produced somewhere pretty close to you, but if you don’t know where that is, or what forms it takes, it isn’t going to do you much good.

That, ultimately, is what this post is about. It’s a question I’ve started asking myself, my friends, and my neighbors. How is your food culture defined? What are you doing to ensure that your food culture is sustainable and viable? If all external inputs were removed, would you eat food or die?

Tuesday, August 10, 2010

Enough with the salad already…

I know it seems like I’m fixated or something, but this will be the last one for a while:

My latest salad creation:

Bowl full of mixed baby arugula and lemon grass (ideally, cut fresh from your salad bowl)

1 thinly sliced red onion

½ thinly sliced cucumber

Vinegar, salt, pepper, grated cheese (if desired)

Place the sliced onions in a sieve and soak in cold water for a half hour. Discard the water. Add the sliced cucumber to the sieve. Add ½ cup of vinegar to a bowl, then add the cucumbers and onions; season with salt and pepper to taste. Fill the bowl with cold water until the vegetables are covered. Let soak for a half hour.

Drain the onions and cucumbers and add to the bowl of greens. Toss and dress with a light vinaigrette. Top with a grated hard cheese like Parmesan or Romano (or whatever else you have and/or like).

Monday, August 9, 2010

Salad under Foot

In my last post I wrote about growing salad greens in a bowl or pot, and what a wonderful source of fresh organic leaves it is. In my exuberance, I nearly forgot about the other source of greens I’ve been exploring this summer – the weeds growing like gangbusters while my grass dies due to drought. In general, weeds are considered a nuisance and not given much attention, and they’re usually found growing where you don’t want them to grow (or where you’re trying to grow something else). On closer examination, however, some of these weeds have uses.

The most common weeds in my area are dandelion, plantago (plantain), and burdock. All three of these plants are edible, at least in part, and also medicinal. Dandelion and plantain provide greens for salads. Burdock leaves can be harvested in the spring, but this plant is usually valued for its roots, which are eaten like a vegetable. All parts of the dandelion are edible and have various uses, including the famous wine made from its flowers.

What makes these plants special is that they grow on their own. They require no effort on your part – in fact, just the opposite is true: as anyone who has ever tried to maintain a lawn knows, effort is required to keep them from growing. They are literally free food, there for the taking. Well, there is a small cost – you have to get over the idea of harvesting weeds from your lawn or other public green spaces; you have to prepare yourself to eat something you’ve been taught to despise all your life; and you have to be willing to accept the questioning looks and possible scorn of those who believe that food can only come from stores.

If it helps, many of these so-called weeds used to be cultivated as crops, and in some places still are. Plantago, for example, is still widely grown in gardens around the world. Another ‘weed’, chicory, is widely grown in some places, and is somewhat famous as an additive to or substitute for coffee in New Orleans. Here where I live, it simply grows wild, any place and every place it can, and its flowers are a scenic staple along the sides of roads. It should also be noted that in many upscale restaurants, salads of ‘wild greens’ are a featured menu item, and examining these expensive little piles of greens reveals that dandelion, rocket, chicory, and other ‘weeds’ are prominently present. Our ancestors used and relied on these crops, but that knowledge has now been largely, though not entirely, lost.

If you’re interested, as a Luddite Apprentice or simply someone looking for a free meal, the place to start is with a good field guide to edible and medicinal plants. Then spend some time outside comparing your weeds to what you find in the book. Remember that details are important here. There are many plants of radically different species that look very similar during various parts of the growth cycle, and the only way to distinguish a friendly plant from a dangerous one may be small details like the way the leaves grow from the stem. If you have any doubt, you should consult an expert, in person.

Monday, August 2, 2010

Salad Bowls

I like salad. I’m a firm believer that eating raw vegetables has health benefits that exceed the value of those same vegetables (or similar ones) cooked. There’s some science behind my belief, but when it comes down to it, really, I eat them because I like them. Most of the salads I eat are based on a bed of greens – sometimes they’re nothing but a bed of greens – and if you like salad greens you’re probably aware that the younger they are, the better they taste.

Over this past winter, I experimented with sprouting. Sprouting is quick and easy, and I tried all different kinds, and subsequently bought a lot of seeds. As soon as spring kicked in, though – and it did so early and with a vengeance this year – my thoughts and appetite returned to greens, and for the most part I gave up sprouting. This, in spite of the fact that I don’t have a proper garden this year. I confess, I was buying those little plastic tubs of organic baby greens from the local grocery store, as well as some from my local farmer’s market when they were available.

Then I had one of those head-slapper moments. Why on earth was I buying greens that I could just as easily grow myself? Most of those sprouting seeds I’d bought were for greens and salad herbs. So I got out a large pot, filled it with dirt and some homemade compost (which I continue to make even though I don’t have a proper garden yet), seeded it with sprouting seeds, added water, and about three weeks later harvested a nice crop of mixed baby salad greens – enough to provide a dinner salad for three adults and two children.

After harvesting my first salad, I hand-tilled the soil in the pot and started a new batch. Also, because it was so good and so popular, I started two more pots, staggered by a couple of days. Homegrown salad greens are now becoming a staple of our household diet. More pots and we could have fresh, organic salad every day, with exactly the mix of greens we want.

The news just gets better and better, though. First, there’s the cost: a bag of mixed green sprouting seeds costs about the same as one plastic tub of organic greens from the grocery store – I’ve grown seven salads from the first bag so far, and have at least enough for three more. Next there’s the environmental impact: no plastic tubs, no driving to the store, no importing our greens from California. This isn’t the 100-mile diet – this is the 100-foot diet. With seed-saving (by allowing a pot to grow, bolt, and go to seed), the price and environmental impact could be reduced even further – to almost nothing. Finally, there’s the fact that by growing the salad in pots, once the weather turns cold the bowls can be moved inside and continue to produce fresh greens through the winter (albeit a little more slowly – you can trick Mother Nature, but only within certain limits).
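To put rough numbers on that first point – the prices below are guesses for illustration, not what I actually paid; the salad count comes from the seven I’ve grown plus the three or more still in the bag:

```python
# Hypothetical prices, for illustration only.
tub_price = 4.00        # one plastic tub of store-bought organic greens
seed_bag_price = 4.00   # one bag of mixed sprouting seeds (about the same)

salads_per_bag = 10     # seven harvested so far plus at least three more
cost_per_homegrown_salad = seed_bag_price / salads_per_bag

print(f"Store-bought: ${tub_price:.2f} per salad")
print(f"Homegrown:    ${cost_per_homegrown_salad:.2f} per salad")
```

Whatever the actual prices in your area, the ratio is the point: one bag of seeds replaces roughly ten tubs, before seed-saving reduces the cost further still.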

The grocery store owners probably disagree, but for my family and me, this is a win-win.