Planet Phoenix.PM

April 21, 2008

Brock's The Lack Thereof

TLT - 2008.01.02 - Events vs Actions in UI Code

Happy New Year!

I've been grappling with a concept for a long time now (years), and thought I'd put it down here to cast about for insight.

Here is one way to handle UI events:

$page->add_action(add_new => 'Add New Entry');
$page->display; # displays template, waits for input
$action = $page->get_action;
if($action eq 'add_new') {
    # ... handle creating the new entry ...
}

Here is another:

$page->add_action('Add New Entry' => sub {
    # ... handle creating the new entry ...
});
$page->display; # displays template, waits for input, runs callbacks

The first is quite imperative. Show the page. Give me the result. Examine the result. Act. The second is much more declarative. I declare that were such an action to occur, this is what you should execute.

The second is the way that Seaside handles things. I'm not quite sure why I'm reluctant to adopt this method... perhaps simply my lack of experience with this construct is to blame. I think it's some sort of voice in the back of my head that doesn't like it because it is a bit too much like desktop GUI callbacks. But why should that be a bad thing? It seems to work just fine for those applications.
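The callback style can be sketched end-to-end in a few lines. This is a minimal illustration of my own, assuming a hypothetical Page class (not Seaside, and not any real framework's API); the `simulate_click` method stands in for the framework rendering a template and blocking on user input:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical Page class: maps action labels to callbacks and
# dispatches on the action the user triggers.
package Page;

sub new { bless { actions => {} }, shift }

sub add_action {
    my ($self, $label, $callback) = @_;
    $self->{actions}{$label} = $callback;
}

# In a real web framework this would render the template and wait for
# input; here we just simulate the user triggering an action.
sub simulate_click {
    my ($self, $label) = @_;
    my $cb = $self->{actions}{$label} or die "no such action: $label";
    $cb->();
}

package main;

my $entries = 0;
my $page = Page->new;
$page->add_action('Add New Entry' => sub { $entries++ });
$page->simulate_click('Add New Entry');
print "entries=$entries\n";    # prints "entries=1"
```

The imperative style keeps control flow in the caller; the declarative style hands control flow to the page object, which is exactly the inversion that makes it feel like desktop GUI callbacks.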

I think I'm thinking about this too much.

April 21, 2008 10:21 AM

TLT - 2008.03.30 - Making Music

I love to create and build -- lately I've made a few songs! My friends like to play guitar and sing, and I play the harmonica and am learning guitar. So here are my recent creations (all with the help or in conjunction with friends):

Only the middle one has a recording posted so far, but I'll get recordings of the others sooner or later. More to come I hope :)

April 21, 2008 10:21 AM

Andrew Johnson's Transformed Planet

Children of Jihad: A Young American's Travels Among the Youth of the Middle East

I normally don't write a review of a book until I've finished it (which means I hardly ever write book reviews, because I've been so bad at finishing books lately), but I really wanted to post about how much I am enjoying Jared Cohen's Children of Jihad. While its subtitle, A Young American's Travels Among the Youth of the Middle East, sums up the book's subject nicely, it doesn't convey how truly insightful the book is. Cohen gained this insight by doing a seemingly simple thing: he listened to people. He sat with them, ate with them, and listened as they told him about their lives, their dreams, their aspirations, their problems. By doing this, he gained a much more complete, coherent picture of the Middle East than any number of "political analysts" or cultural experts. This is a great book if you want to understand more about the true plight of people in the Muslim world. I'll write a more complete review when I finish the book.

Update: I've finished Children of Jihad, and I have to say my opinion hasn't changed; it's excellent. If you've ever asked yourself how millions of people in the Middle East can hate the entire Western world, this book reveals the truth: they don't. Cohen knows this because he went to the Middle East and talked to them. Even when some groups have grievances with America, those grievances are most often with the US government and US policy rather than with Americans themselves. (Let's face it: who doesn't have a beef with the US government right now?) Based on Cohen's travels, the problem mostly lies with that small minority of people who do hate the West as a collective, for various reasons. Unfortunately, in several countries those hardline "Death to America" people are in control of their governments, keeping a stranglehold on power and thus being the most visible people in the news media. Not that "the media" is the sole reason for the distorted view that the Middle East and the West have of each other, but it doesn't seem to have helped matters much. Reading Children of Jihad definitely gave me a better perspective on the Middle East and its people. Most importantly, it gave me this perspective by using the best source of all: the people themselves.

April 21, 2008 10:17 AM


I originally planned to post a list of my new year's resolutions, but especially now that January is mostly over, I think the following is a healthier attitude toward resolutions:

April 21, 2008 10:17 AM

Scott Walters and friends on Twitter

scrottie: @tweetscan Please add search of tweet plus tweeter's location, w/ feeds. Eg, Perl in Phoenix, for

April 21, 2008 08:50 AM

April 20, 2008

Scott Walters and friends on Twitter

scrottie: NG: bug doesn't seem to be in wincheck. Diffed new vs antique. Now to check rest of slots logic. Argh.

April 20, 2008 08:09 PM

Phoenix Ruby User's Group

[ANN] April 23: Refactor Phoenix Meeting

April Refactor Phoenix Meeting

We're at Boulders on B'way! (We'll be bringing our video projector.)
Time: social stuff at 6:15pm; presentation at 7:00pm
Place: Boulders on Broadway
530 W Broadway Road
Tempe, AZ 85282

by (James Britt) at April 20, 2008 03:29 AM

April 18, 2008

Perl Community News

PAUSE gets a CAcert signed certificate tomorrow

The Perl Authors Upload Server (PAUSE) has had a self-signed certificate for its SSL service. Tomorrow Andreas is installing a certificate signed by CAcert, a certificate authority. You might need to install the CAcert root certificate so your browser recognizes the authority and you don't get a warning, but you don't strictly need it to access PAUSE.
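The difference between the two situations can be sketched with the openssl CLI (an illustration of my own; the file names are made up, and this throwaway certificate has nothing to do with PAUSE's real one):

```shell
# Make a throwaway self-signed certificate, like PAUSE's old one.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/" \
    -keyout pause.key -out pause.crt

# Against the system CA bundle it is untrusted -- this failure is
# what the browser warning corresponds to:
openssl verify pause.crt || echo "untrusted (self-signed)"

# Explicitly trusting the issuer makes verification pass; installing
# the CAcert root into your browser plays the same role for a cert
# signed by CAcert:
openssl verify -CAfile pause.crt pause.crt
```

In short: the cert itself doesn't change in trustworthiness; what changes is whether your client already trusts whoever signed it.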

Read more of this story at use Perl.

by brian_d_foy (posted by brian_d_foy) at April 18, 2008 02:37 PM

Perl 6 Design Minutes for 16 April 2008

The Perl 6 design team met by phone on 16 April 2008. Larry, Allison, Jerry, Will, Nicholas, Jesse, and chromatic attended.

Read more of this story at use Perl.

by chromatic (posted by KM) at April 18, 2008 01:24 PM

April 17, 2008

Perl Community News

mod_perl 2.0.4: finally, it's here.

frankie_guasch writes "Finally, it's here and it works with Perl 5.10!"

Read more of this story at use Perl.

by brian_d_foy at April 17, 2008 03:10 PM

YAPC::SA::2008, April 17-19 in Porto Alegre

worm writes "Brasil-PM and SPB are organizing Yet Another Perl Conference South America (YAPC::SA::2008), April 17 thru April 19, 2008. The meeting will take place during the 9th International Free Software Forum (FISL 9.0) in Porto Alegre (RS), Brazil, which is coordinated by the Free Software Association, and will be held at the PUC-RS (Catholic University) Exposition Center. Randal Schwartz will be a keynote speaker. Some live video streaming will be available (pt_BR).

Centro de Exposições PUC-RS
Av. Ipiranga, 6681
Bairro Partenon
Porto Alegre, RS
Brazil

YAPC::SA::2008 webpage (pt_BR)
FISL 9.0 webpage (pt_BR, es, en)"

Read more of this story at use Perl.

by brian_d_foy at April 17, 2008 02:10 PM

YAPC::NA 2008 Registration Open

With exactly two months remaining until YAPC::NA 2008, we are officially opening the payment system for registration to the conference. From now through the end of April, an early-bird price of $85 USD is available for attendees. After that, the registration cost goes up to $100 USD, so if you are looking to save some money on registration, book now! Of course, we are also making the full-price registration available for those who are feeling generous.

Some things to note: If you are a speaker, DO NOT pay for the conference unless you really, really want to. We will be passing out a coupon code sometime in the next few weeks so that you can properly register. Also, if you want to book on-campus accommodations you'll have to wait a week or so to reserve those; however, you can register and pay for the conference now to take advantage of early-bird pricing. The housing registration system will be available very soon. The maintainers of ACT (the system we use to manage the conference) are working hard to create a more robust purchasing component of ACT that will make booking housing easier this year. A lot of this work will be done at next week's ACT hack-a-thon. Please join all of the ACT hackers in making ACT an even better system for managing YAPCs around the world.

And finally, be sure to get the word out. We want to make this one of the best YAPC::NA's to date, and to do that we need attendees! Tell your friends. Tell your boss. Tell your local PM group or LUG. YAPC::NA 2008 is almost here!

Read more of this story at use Perl.

by jmcada (posted by brian_d_foy) at April 17, 2008 12:12 PM

Tech Meeting @, this Thursday!

naterajj writes " will be having a technical meeting this Thursday the 17th of April at the offices in downtown Los Angeles. Theron Stanford will be presenting Module Versioning with Apache 1 and mod_perl 1, AKA how to load and use different versions of the same module. For details please visit"

Read more of this story at use Perl.

by brian_d_foy at April 17, 2008 12:10 PM


April 16, 2008

Perl Community News

Act Hackathon planned next week

YAPC::Asia 2008 organizers would like to thank Eric Cholet, the author of ACT, for the great conference organizing software that powers most YAPCs and Perl Workshops. To show our appreciation the hacker's way, I'm flying to Paris, France next weekend (April 24-28), funded by YAPC::Asia's possible profit, to work on ACT feature enhancements. We plan to work on these things because we want them for YAPC::Asia:

* OpenID provider support
* Better Japanese name display (i18n)
* Embedding videos and slides (YouTube, Google Video, etc.) in talks
* Personal scheduling (who is attending which talks)
* Online check-in API (who actually showed up, and when)
* Promotional codes / coupons for discounted payments

We (at least, I) prioritize implementing these because the trip is funded by YAPC::Asia, but if there's anything you think is missing from ACT, I'd love to hear it. Remote participation (#act on during the weekend) would be welcome too!

Read more of this story at use Perl.

by miyagawa (posted by brian_d_foy) at April 16, 2008 01:29 PM

Parrot 0.6.1 "Bird of Paradise" Released

particle writes "Aloha! On behalf of the Parrot team, I'm proud to announce Parrot 0.6.1 "Bird of Paradise." Parrot ( is a virtual machine aimed at running all dynamic languages. Parrot 0.6.1 can be obtained via CPAN (soon), or follow the download instructions at For those who would like to develop on Parrot, or help develop Parrot itself, we recommend using Subversion or SVK on the source code repository to get the latest and best Parrot code.

Read more of this story at use Perl.

by davorg at April 16, 2008 10:13 AM

April 15, 2008

Perl Community News

NPW 2008 - Website online and registration is open

The ACT website for Nordic Perl Workshop 2008 is now online and registration is open. You need an ACT account to register, but if you've been to any YAPC or local workshop in the past few years, chances are good that you already have one. If not, signing up for one is easy peasy. Oh, and don't forget to submit talk proposals if you want to give a presentation!

Hope to see you in Stockholm,
Claes Jakobsson and the NPW 2008 organizer team

Read more of this story at use Perl.

by claes (posted by grinder) at April 15, 2008 11:04 PM

Phoenix Ruby User's Group

April Phoenix Rails Meeting

Late notice, as it has kind of snuck up on me with all the travel and madness that has been going on.
Tuesday, April 15, 2008 7:00 PM (TODAY)
Integrum Technologies
290 East El Prado Court
Chandler, Arizona 85225
33.3389, -111.835
Overview of Mountain West Ruby. Potential presentation by Chris

by (Derek Neighbors) at April 15, 2008 09:19 PM

April 12, 2008

Scott Walters on

To maintenance programmers, all languages are the same

Yeah, it's a bold assertion because it flies in the face of so many gripes from so many people for so long. Take the dailywtf. "I'm maintaining this horrible ASP programming" or "the guy who worked here wrote this in Perl". I wrote earlier about the rot of Phoenix.PM as the entire congregation defected, leaving only maintenance programmers.

These are programs that have been driven into the ground. Fear of refactoring, having the wrong people calling the shots, and the wrong priorities turned them into dung. This can happen in any language. Java has now been around long enough that programs written in Java have descended into chaos. I've seen them. I've been hired to hire Indian programmers, and I insisted on reviewing code before hiring a firm. All of them had terrible voodoo chicken-bones code that in 100 lines foreshadowed million-line voodoo code projects. And it was all in Java.

A program in any language in a historically badly run project evolves to somewhere past the point of insanity. No self-respecting programmer will try to continue adding features at this point, which accelerates the descent when companies hire novices to try to extend the dung heap. The good programmers are put on maintenance; they won't do anything else, and keeping the thing running is a challenge.

But you know this. What matters is that the economics completely change when the project goes to hell.

The expectations for a maintenance programmer are calibrated to the difficulty of the project. None of the schedules, deadlines, SLOC, milestones, or other productivity metrics apply. There are easy bugs and hard bugs, and sometimes the hard bugs live for a very long time. Doesn't matter if it's COBOL, Perl, Java, ASP, or what. The maintenance programmer has proved himself over and over again to his employer, so the lack of metrics doesn't matter. He's been woken up at insane hours, worked late, and traced down insanely involved problems that kept the whole thing wedged. But... the language doesn't matter. The difficulty of fixing problems doesn't vary. It's a matter of definition: the project grew in whatever language until it's unmaintainable, wherever that is for the language, and then the good programmers are just keeping the boat afloat.

Phoenix.PM is almost entirely composed of maintenance programmers. Programmers who like to write new code left a long time ago. And that was in other languages, for the most part: Java, C++, Python, Ruby...

But as a maintenance programmer, you're not writing code. And if you're proficient in a language, you can read the code. The idiom doesn't matter. The idiom doesn't make you more or less efficient at hunting action-at-a-distance problems. The hard bugs aren't in the expressions, they're in the interactions between parts of the system... or the lack of parts of the system. The fact that Perl's syntax can be confusing is completely irrelevant to maintenance programmers. Those are the easiest bugs they encounter in their job.
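The point that hard bugs live in interactions rather than in expressions can be shown with a toy example (my own construction, not from the post). Every line below is individually readable in any language; the bug is action at a distance through a shared global:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A global that two far-apart pieces of code share.
our $sep = ",";

# Joins fields using the global separator.
sub format_row { join($sep, @_) }

# A "helpful" debugging helper that changes the global separator
# and forgets to restore it -- the classic action-at-a-distance bug.
sub debug_dump {
    $sep = " | ";
    return format_row(@_);
}

debug_dump("a", "b");                # somewhere deep in the system...
print format_row("x", "y"), "\n";    # expected "x,y", prints "x | y"
```

Neither sub is confusing to read, in Perl or in anything else; the failure only exists in the interaction between them, which is exactly the kind of bug that occupies a maintenance programmer's time.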

So, the conclusion that could easily be drawn from reading the dailywtf... that Perl is horrible, because there's so much horrible Perl out there... is a fallacy. Or that ASP is horrible because there's so much bad ASP out there... in both cases, I suspect the real problem is that programs overgrew their design, weren't redesigned and refactored, and the good programmers were forced into maintenance by business need.

If you're a good programmer who likes writing code, you're forced away from entire languages. You're suddenly unable to touch Perl, as an employee, because the only jobs are maintenance or else adding features to a completely fucked system, which you won't do. A few dozen companies mismanaging projects ruined the market for you.

And, conversely, PR won't do any good for Perl. The only thing to do is... start new projects. CPAN modules are good, but so are "Web 2.0" or "dot com" or whatever businesses. Make stuff in Perl that people care about. Create new projects that'll require maintenance, perhaps, and might be easy to maintain, but regardless *aren't* maintenance gigs right now.


by scrottie at April 12, 2008 07:05 AM

April 06, 2008

Scott Walters on

what twitter-land is saying about Perl

I use Twitter. I like being able to tell my 1.5 friends where I'm at and having him maybe drop in on me. (God, I suck.) I found and decided to search for mention of Perl, possibly to follow Perl users in the area, if I could find them. Here's a sample of what I got:

ceri : remembering why I hate Perl

PoBK : *Scream* This is why I HATE perl. There are no less, that 17 modules on CPAN to query domain Whois data! Seventeen!!!1oneeleven

ironsoap : We now come to the part of the show where I sing a little song about how much I hate Perl and its evil sibling Unicode.

calico : @mrballistic - re: DB-backed sites -yep, I worked with a company doing it with PERL backends and an Access DB back then - painful

neilfws : think the perl love affair might be over

jonsagara : I can't believe there are Web applications still written in Perl.

baseonmars : i want to stab perl in the head and make it's eyes bleed.

ba78 : "No one working on ES4 wants it to be like Perl 6." :)

chastell : ‘Perl style guide’. ahahaha *snort*.

neilfws : enough perl horror for today; need to go home and calm down

acdha : replacing hackish Perl with half as much better Python feels *so* good

etherjammer : @fadeaccompli, deep sympathy. - just occurred to me - are you going to have to learn Perl?

eosadler : @amndw2 Why perl? Consider trying ruby. "Learn to Program" by Chris Pine is a great intro book that I've been teaching from.

SlexAxton : Both of these return true in Perl: "Perl" eq "Perl" and "Perl" == "Dumb"

fortunetweet : There are worse things than Perl....ASP comes to mind

breathoffire : The whole "theres more than one way to do it" concept of perl can be tedious.

noahk17 : @bluesharpie5 So far everyone has said job #2. Perl is dead anyways. I'll be sure to tweet as soon as I hear something!!

mpstaton : @dweekly perl? isn't that so 2004?

offwhitemke : Primary experience example of less code is not better, Perl.

... to be fair, there were some positives mixed in, but the overwhelming negativity caught me by surprise. I'm trying not to be in denial about my language choices. This can't go without comment. Well, it could. My reply:

Perl haters: shut the hell up until you learn awk and sed, understand what they do that other languages don't, and why Perl took from them.

I know; boring old topic. Image changes, like adding strong typing, publishing Best Practices, or CPANTS, aren't going to change perceptions. How do we tell a whole generation of people first learning to program on purely procedural/OO languages what the *point* of Perl's existence is?


by scrottie at April 06, 2008 08:34 AM

March 31, 2008

PhxBSD User's Group

PhxBUG April Fool's Meeting

2008/04/01 - 7:00pm
2008/04/01 - 9:00pm
ASU GIOS [map]

This month's presentation will be Firewalling with PF, from Basics to Bastion Host. The packet filter PF is available on OpenBSD, FreeBSD and NetBSD and features a simple, clean syntax which makes it accessible to home users, though it's powerful enough for enterprise applications.
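As a taste of that syntax, here is an illustrative pf.conf fragment of my own (not from the presentation); the interface name is an assumption:

```
# Illustrative pf.conf fragment -- default-deny inbound,
# allow outbound, permit only SSH and HTTP in.
ext_if = "fxp0"        # external interface (assumed name)

block in all           # default deny inbound
pass out keep state    # allow all outbound traffic, statefully

# Let SSH and HTTP in on the external interface's own address.
pass in on $ext_if proto tcp to ($ext_if) port { 22, 80 } keep state
```

Five lines of rules already give a usable bastion-host baseline, which is the "simple, clean syntax" the talk description refers to.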

Location Details:

GIOS is on the southwest corner of University Drive and College Avenue, directly across from the Newman Center and next to a Methodist church parking lot. There is free parking on 7th Street a block north of our location, but they charge you to park in the Methodist church lot. As before, the entry doors are locked at 5pm; conference room #308 is in sight of the door. You must take the elevator to the third floor.

by dwc at March 31, 2008 08:36 PM

March 29, 2008

Scott Walters on

How to be productive at work, not yer papa's version

I spent a few years at web shops and watched new programmers come in. They fancied themselves great programmers; being young, they had a lot to prove. They didn't like how we did things. It was so stodgy and dated. But rather than show us how it's done, they... did nothing. Overwhelmingly, they crawled into a shell, where they maintained at least the imagined possibility that they're a great programmer trapped in a terrible company. My take on this is that they're afraid that if they ever tried to spread their wings, they might fall, then they'd be really embarrassed. Or it would take a few tries to get it right. Or it would be more difficult than they imagined. Or they'd get unlucky and be judged by their one failure.

Advice: get over yourself. Good programmers know what it's like starting out, and we're not keeping score. We don't force you into a strict regimen because we have faith that you'll eventually grow into the role, trial and error included. If you don't like the company you signed on to, then do this: really spread your wings and risk making mistakes, getting in over your head, breaking things, failing to implement designs, and so on. Make your mistakes there, learn your limits, and grow, and do it before you find yourself in another company, in a position of having to prove yourself. Were you defeated by some small, backwards, ASP-using company? Why? Try to conquer them. Even if you fail, you'll learn countless valuable lessons in trying to take over companies -- and by take over, I mean steering technology in a positive direction, one of your own choosing.

Work boring you out of your mind? Selling widgets not engaging? Don't pretend. Ignore the hype about agile... widget selling. They probably have a lot of turnover from unmotivated programmers. Rather than trying to strike a compromise with their programmers to make staying worth their while, they treat the programmers like they're stupid and try to convince them that selling widgets is the coolest thing out there. Rebel. If you're really good, you can spend half of your time working on fun projects and your employer will still be far better off than if they hired some twit who didn't have a good work ethic anyway. Spend your time improving Perl. Release internally used code as open source. Generalize processes so your competition can use the code. Make Perl faster so your employer can buy less hardware, and share the wealth with everyone. Re-interpret your own job title. Yes, companies hate this, but it's a fallacy for them to imagine that a brilliant programmer could be tricked or coerced into spending that same energy and passion... selling widgets.

Learn what all of the other groups do. Learn the reasons they don't like your group, which almost certainly exist even though programmers are shy about admitting them. Be a judas. Make connections there. Connections are good.

Spend a certain amount of time trying absolutely crazy schemes. Write code to heuristically identify in-lineable method calls, cross-reference it with DProf output showing which are frequently called, and inline them with B::Generate or Code::Splice (created for that purpose). It might not work. It might be absolutely terrible. But doing something hard the wrong way will almost certainly illuminate you as to the (or a more) correct way to think about the problem. And if you get stuck in the mindset that all attempts must be successful, you'll only go after the low-hanging fruit, and the project will slowly collapse under its own weight. None of the hard refactors will be done. No serious overhauls will be made. No really new ideas will be incorporated. I've worked in Perl shops that were downright xenophobic about ideas from other camps -- and I attribute it to this "all programmer time must be accounted for, all projects must succeed" induced shell shock. Under a fascist regime, all new ideas look like bad ideas.

After you fail at something hard, or something crazy, you'll have motivation -- saving face -- to work on the boring stuff.

Working on the boring stuff isn't all bad either. That's often when the patterns with a general solution appear.

A general pattern to this blog post is companies who hire programmers at one level and expect them to work at exactly that level and stay at that level, like a cog. That's not good for you or them.

Here's a fun crazy thing: write parts of the system in a completely different language. People whine entirely too much about splatterings of different languages. If the Google programmers are so fucking smart, how come they can only cope with having Python, Java, and C++ around, and nothing else? Pussies.

Work other places than in your cube. To get the juices flowing, you have to break the routine. Sure, they don't let you on the 802.11, and if you put an AP under your desk, they'd hunt it down and destroy it and you with it. But they're probably not sniffing for Ricochet. Get some old STAR modems and set up your own wireless network. Or packet radio. Or any number of other technologies. Work in the company cafeteria. Pretend it's a coffee shop.

Get your work done in a marathon session at the start of the week, then take the rest of the week off. Stop in on Friday to catch up on email and get a jump on planning the next week. I don't mean work really hard from 9-5 -- I mean pull all-nighters.

Make friends with people who work for your competition. Your employer will hate this too, but it's very useful to break out of the "We The Company" mindset. I'm not sure why they pump that stuff on everyone as it's fundamentally destructive. Having some perspective on it all will help keep the stupidity from bogging you down. Perspective wards off depression.

Every now and then, get your coworkers involved in a little fragfest.

When your CTO does something stupid, blog publicly (but anonymously if necessary) about how stupid the thing he/she did was. Don't think that just because they report directly to the shareholders, they're immune from responsibility to the shareholders (fuck knows why anyone would take that leap of logic anyway).

Find people you like there. The job will end for one of you eventually, and before you leave or he/she leaves, get to know them. Your future is far more wrapped up in good people than in some mythical, imagined "good job".


by scrottie at March 29, 2008 02:27 AM

March 26, 2008

Scott Walters on

Offtopic: Women.

Okay, guys. Letting things go and trying to smooth things over doesn't work. It's destructive. Counter-productive. No, don't snap the first time the woman does something you don't like or starts nagging or bitching. And this isn't women vs. men. Here's the crux of the problem: when one gender starts pretending like bullshit isn't bullshit, they've sent an open invitation for more bullshit. They'll find that they're *always* wrong because the other gender is making the rules. It seriously sucked to be a woman in the 1950s. Men made the rules, and the rules were arbitrary, set after the fact or made up on the spot (like Calvinball), and interpreted according to whim and agenda. If the 1950s male felt the female was being ditzy, absent, irresponsible, imprudent, or otherwise "unladylike", he'd become angry, solemn, reproachful, or some combination. There was no arguing it. The female simply had to wait for the wrath to pass and some figurative second chance to be extended.

Then we had women's lib, equality in the workplace, the divorce revolution, the female sexual revolution, and so on. This is not all bad, no question. But some myths were created as waste during the process. Here's the king of them: that if a woman feels injured, the male did something wrong. No. Bullshit. People get hurt feelings for bad reasons perfectly often. Children have episodes where they act as if adults have some conspiracy against them and that's why they don't get the cookie they want or the toy they want. With a stern hand (not giving in at the threat of a tantrum), children eventually learn that not getting the toy or food has to do with far more complicated affairs -- money, health, and just general education in emotional maturity, especially in not being manipulative and demanding. Of course, not all children have the benefit of this education. Some go through high school or college without anyone ever saying "no" to them. And for some bizarre reason, probably also a waste product of change in the past 50 years, the conceited, manipulative, self-important, obnoxious, demanding, pouting, crying, whining, spoiled female is currently in demand. What the hell? There are countless shades of grey aside from the extreme case and it would be impossible to enumerate all of them.

Now, men's desires are treated as illogical and childish -- the desire to own motorsports equipment, to hang out with their friends (play poker, or SJG Illuminati, or just drink beer). It's indulged with condescension. The state of the house is dictated by the female, with the furniture and decoration on display vetted by her, and any evidence of a male presence virtually removed. The whole house is girlified. Video games are hidden. Behavior, action, and decision making are run by the female -- at least in the last several relationships I've seen -- with a few notable exceptions. All, aside from those few exceptions, were extremely unhappy. The female was insecure because she had no way of knowing whether the male's love was genuine or a duty/act, since it's largely forced and extracted. The male can't behave as he pleases in cases where it just doesn't fucking matter, but instead has to justify everything and get permission from a gender that can't understand male stuff (just as men can't be expected to understand makeup, barbie dolls, and the elaborate social system of women). And everyone is unhappy.

I'm not encouraging a revolt, or a reversal. It comes down to this: say "no" sometimes, as it *makes sense*, not to push weight around. If the woman loves you, she'll deal with it. If she starts pulling crap, call her on it. Don't pretend like bullshit isn't bullshit. If she just can't express her case, give her every chance to explain it and indulge strange behavior in the interim. But when she's bitching at you for something stupid, say no. Leave. Let her stew. Let *her* decide if *she's* willing to love you and to look past behavior she can't understand. When she demonizes behavior she doesn't understand, explain as best as you reasonably can, but don't let her inability to understand dominate your behavior. It's possible to be loving, not do that bitch-ho crap, but not be a complete pussy either. Stand up for yourself. The women are doing it and the men haven't walked away. Each relationship is still going to have a leader, but it should be a kind, and not insecure, leader.

I write this because people I'm close to are suffering.


by scrottie at March 26, 2008 02:41 AM

March 15, 2008

Scott Walters on

Still pondering hosting: open SSILinux cluster?

Still pondering hosting. There's a colo in town that charges less than DSL, at $45/mo, and I want to have some fun with it. So here's my latest...

Colocate one SSILinux x86 machine there, and through a reseller deal or something, encourage others to send x86 machines to me to get installed with SSILinux and added to the cluster.

If you're not familiar with SSILinux, it's a hacked-up kernel that makes a bunch of machines on a LAN look like one big machine with lots of RAM, CPUs, and storage. See also OpenMOSIX.

For your $50/month (or whatever), your machine gets added to the cluster, and you get a permissive account on the cluster (probably with some restricted sudo access). So, rather than just having a lame machine colocated somewhere, you have the shared resources of a supercomputer! (Exclamation point, exclamation point, exclamation point.)

I was toying with doing yet another free shells service, but I can't break away from the fundamental problem of not wanting to lock it way down (disallow CGI, etc) but not having enough CPU resources in one machine to give out permissive shells.

Groups of users fearful of "the slashdot effect" (also known as the reddit effect, and so on) could band together. Not everyone's site could be hit at the same time, but the spare capacity could see one or two people through the storm.

And it's anti-isolationist. Sure, it's nice being root, but it's also nice having users (which I currently lack, almost entirely). They put mp3s of little local bands in their folders and you can swipe copies. You can help them with their code, and they can easily help you with yours, and everyone just generally checks out what everyone else is doing. Life is a beach =)

Some "community policy" stuff would have to be put into place. If anyone is using more than their fair share and doing so consistently (trying to crack passwords, whatever), they get booted off the cluster after a warning. If they're merely using more resources than they're contributing, perhaps they can be asked to add another machine or upgrade their existing one.

Unlike existing shared hosts, daemons would be allowed and expected. Enough IPs could be bought and the public interface aliased such that everyone who wanted to run a stupid IRC server could.

I got this 1.6GHz Celeron machine here to colocate running FreeBSD (having given up on the stability of free Unices on RISC hardware, and having given up for the time being on Solaris and IRIX), but maybe I'll really slide back the way I don't want to go and run Linux on x86 (yech, yech) just because clusters are cool.

Thoughts? Would *you* give up (most) root privileges to have your dedicated host be part of a growing supercomputer?


by scrottie at March 15, 2008 09:27 AM

scrottie's guide to Phoenix

Having been in Phoenix for about ten years, I keep thinking I should do a review site. Phoenix is mostly a wasteland of endlessly repeating chain stores, but after long, hard, careful searching, a few gems have emerged. I'm cranky, and my annoyance at the sucky drives me to find the good, and this is the result of my effort. There's another excellent list out there (hey, I've done almost all of those things!) that focuses more on adventure and living life in AZ to the fullest. In no particular order:

Doctor: Your Neighborhood Doctor, Dr. Garcia, 6345 E. Bell Rd. Walk-in. I hate the whole "doctor's office" deal. Waiting an hour for a minute of someone's time who doesn't listen and then abruptly vanishes without warning is pointless and humiliating. I really can't explain the Your Neighborhood Doctor other than to say that they just don't do that, pointedly so. It's like the story where you walk into the executive office to have a confrontation with the CEO, find yourself confronted with an intelligent, sympathetic, thoughtful person, who turns out to be the janitor -- except in this case, he isn't the janitor.

Optometrist: Horowitz Vision Center. East of the Fiesta Mall in Mesa in the same complex as a chain optometrist. Got that small town business feel. Again, she loves her work enough that she broke out of the existing system so she could do it right.

Pubs with Good Beer: Papago Brewing: Laid back, local, 20 good beers on tap, great selection in the (large) cooler including Belgian styles, a local specialty. Four Peaks, north or south location: Good atmosphere, pretty good microbrew beer. The Hopknot is probably the most exceptional of the lineup. Bandersnatch was on the top of this list, except the Tempe goons ran them off in their quest to McDonald's-ize downtown Tempe. Fuckers. The Roosevelt: Seriously dig their Rogue fetish. Casey Moore's, just for its college alternative vibe.

Food: Indian, just SE of University and Rural (aka Scottsdale). Also, the Copper Kettle. Aladan's for lunch, SE corner of 92nd and Via Linda. Four Peaks has fantastic bar food too, bordering on gourmet but staying in the pub genre. Que Bueno in Fountain Hills has the best salsa and damn fine margaritas. Sakura Inn in FH is cute and the sushi chef sweet and genuine, but the new owner is extremely obnoxious and pushy. Sushi Ko on 92nd and Shea rocks for authenticity and is better than the overpriced "experience" places. Nello's, two locations. Some good beers (used to have a lot more) and good pizza pies -- specialty stuff.

Internet: FastQ. Grew out of the local Apple user's group, but is a Linux, Solaris shop mostly. Everyone answers the phone there, and everyone you talk to knows their shit. Dedicated IPs are cheap and the ToS is permissive, and they don't screw with your traffic or bandwidth.

Coffee: This is a real flash point with me. The current set is Mill's End Cafe (on Mill) and Three Roots (also on Mill but further down) for working, though extremely boisterous and bizarrely ignorant college students frequent Three Roots, and families with screaming kids and organizations composed entirely of fat old men frequent Mill's End. Inza Coffee, NW corner of the 101 and Shea in Scottsdale: every "for here" latte order gets you coffee art! Open mics, music, art exhibits, free dance lessons, homemade food, outlets everywhere. Also, loud, obnoxious, shouting salesmen on cell phones during the day. Bleah.

I'm sure I'll think of more later (and edit them in)...


by scrottie at March 15, 2008 06:37 AM

March 14, 2008

Scott Walters on

Go save your wishlist *now* before they delete it

Two emails came in, back to back, while I was out on errands: one saying they were going to delete items from my wishlist unless I updated it; and another saying they had deleted "old" items from my wishlist, and that in the future, I should "keep it up to date". FUCK!!!

Looking at it, sure enough, it's down a few pages from what it was.

Every book in there, I seriously intended to *probably* buy, and even though I had twelve pages of books on the wishlist, I'd already bought on the order of 30 or 40 books from them. Every few months I go on a bender and drop $100 there. It wasn't as if part of my wishlist was "old"... what I bought off my wishlist when I went on a spending spree there had nothing to do with when I added the book to the wishlist.

So, now, things expire off your wishlist, and they have little clocks that run down, and you can add more time to them, manually. If you don't ask for more time every few months, it gets deleted. They implemented this feature and didn't warn me until after some of the timers already went off. Again, FUCK!

It's really my own fault for trusting a site with data, but that wishlist was my master list of books to buy. A long time ago, I dabbled with a proxy service that would show you your entire wishlist on one page by scraping their site, but they changed their interface so that you can't view other people's wishlists, so it got scrapped. I had output from it, so I had a snapshot of a very early wishlist. I spent the last three hours piecing together what I could of what was missing, printing out the raw wishlist pages (in fear that more timers would go off in the next few minutes), opening each page, cutting and pasting book titles and URLs into a text file, and then finally, deleting the entire wish list so I wouldn't be getting these horrific emails that my data is being deleted.

Another reason they cited for deleting from my wishlist: their preference engine does a better job finding things for me when the wishlist has fewer things on it. So, wait, they want me to buy more stuff, but stuff that they suggest, rather than the stuff that I actually *wanted* to buy, so badly that they'll delete my wishlist items to better find things to spam me with? Bloody hell. Morons!

Maybe most of their customers add vast amounts of things to their wishlists and never look at them, and they're losing tons of money buying storage space, but honestly, that's hard to imagine. Each thing in my wishlist is one record in a hinge table: it references the item, and it references me. That's two 32-bit integers. Maybe a datestamp too. So my whole wishlist was a few K, even though it was 12 pages long. Is risking pissing off their hard-core customers worth reclaiming these few K from their deadbeat customers? Wouldn't it have been nearly as effective to send people an email asking them if they really want their wishlist, giving them 30 days notice, rather than ten minutes?

eBay (the owner of the site) is a great big monopoly, and it treats its customers like shit. So, I'm now in the market for another online used book store/market. Suggestions?


by scrottie at March 14, 2008 10:26 AM

March 08, 2008

Scott Walters on

MCS (certain chemicals make me stupid) update

An MCS Yahoo! group stumbled across an old post in here where I talked mostly about my earlier experience being extremely sensitive to certain common pesticides (so sensitive that my autoimmune system invades my brain -- google "MCS brainfog" if you want some color on what this looks like).

Here's a quick update, meant for other people with this problem (and it happens with environmentally persistent chemicals other than pesticides, too -- ones that build up in your body and in the animal fat in your food) and for people who are just hoping I'm doing better nowadays.

I'm not suffering day by day, which is *huge*. I can't tell you. But I'm still in a precarious place. I'm basically dependent on telecommute work, and I have too much debt from too much time out of work to be comfortable, and not enough backup plan in place (if I suddenly have to move to escape this problem, if the job goes away, etc). So anxiety still runs very high. I cope with it in various productive and non-productive ways -- often it just overcomes me. I'm worse for wear, pretty burnt out on humanity and generally war-tattered. But, again, day by day, it's been good lately.

I had a neighbor for a long time who didn't consider spraying for the ants eating the grease on his BBQ every day to actually be spraying, and wouldn't stop because he wasn't really spraying (I'm not sure what he considered spraying). And sometimes a neighbor would try to kill all of their weeds (so I spent a lot of time pulling everyone's weeds for them, which made me feel like an ass, as well as feel bullied). And sometimes they'd go on a termite scare and have a company in with a large truck and a 5hp gas pump to dump hundreds of gallons of pesticide on their yard just down the road, triggering an emergency multi-week trip to visit a friend, the first few days of which I was pretty boring. I often cope with pesticide exposure by drinking heavily. It's a diuretic, making me just pee out a bunch of liquid, flushing the system that way (assuming I'm also adding lots of water); it takes the edge off; people expect drunk people to be/act stupid so it's camouflage; it gets me out of the house or away; and heck, I might as well.

Advice to anyone with this problem: GET OUT. Put your stuff in storage with a family member or friend, tell them you're not feeling well, you don't think it's mental, you can't explain it yet, but you need fresh air. Get a tent. Go camping for a week to clear your head. Make a plan. Ask for help. People *will* help you -- not all or many of them, but don't be shy about asking craigslist for a pesticide-free place to live. Don't be shy about telling people who are willing to rent you a room that because even the actions of neighbors affect you, it still might not work out, and you're very sorry and hope it does, and very grateful. If it doesn't work out, repeat: camp again, do something else. Don't camp around other people. Find the state parks in your area where you can camp. Find a laundromat and wash your clothes and everything of yours you can before you go. Don't try to bring things with you that are covered in pesticide or perfume or whatever your trigger is.

Don't be ashamed to be a "bum" while that's what you have to do. There are lots of people who, for various reasons, just loaf for a while. Maybe they went through something traumatic, like a war, or just have emotional stuff they need to sort out. Don't be afraid to share their company. Use hostels. Use the strange resources out there. Don't be shy. You *have to*, so you might as well stop dragging your feet. If you do what you have to do, you'll get out of the brainfog and be able to think clearly. Let your family and personal obligations go if you have to. People will understand, and you'll get your life back when you get your living and working situation going again. If you have to beg, ask people to feed you, not give you money. Don't be shy about that either. Hitchhike. Ride Greyhound. You can always get off Greyhound. Use Tour Mexico. Get away from the problem.


by scrottie at March 08, 2008 01:41 AM

March 07, 2008

Scott Walters on

Catalyst Book Review... that I need to write

I promised Packt Publishing I'd do a review of jrockway's _Catalyst Web Framework_.

Packt seems to be sort of a boutique publisher that writes about whatever topics they have passionate, knowledgeable authors available for. I wanted to do this review because the company that published me, Apress, has a similar mojo going, and I believe in it. It's the opposite of the "everyone and their dog" approach.

And I recognize the name. I even met jrockway once, at YAPC. All in all, people who already have a name tend to write better books.

I had an ulterior motive too: I wanted to force myself to sit down and actually learn this Web framework stuff that's all the rage. Like a movie that gets played on TV over and over, I keep seeing bits and pieces of it, and wanted the whole story once all the way through.

I'm having a hard time writing the review. I read the introduction and was ready to write exuberant praise. The rest isn't exactly bad -- in fact it's probably also great -- but it really served to reinforce the bias I went in with.

The writing was great, and the production was solid. It's pragmatic; it briefly (one paragraph) lists its advantages over CGI and then doesn't touch that horse again. It talks about the "over 190 plugins" for common tasks like "config file parsers, logging tools, email, caching, user authentication and authorization, crypto, internationalization, localization, browser detection, even virus scanning", gives examples of building sites in a modular fashion, and so on. Lesser books walk you through; this one guides you. He stops to explain what the various files in the distro are and generally answers the question, "what's that?".

But when the XML, Perl, TT, DDL, XHTML, and configuration for the ORM all got spilled all over the pages, I cringed. As someone wrote, "an ill-assorted collection of poorly matching parts, forming a distressing whole". Granted, he holds off on the DDL/ORM till the second or so example. He talks about how everything is pluggable and you can use whichever modules you want, but it's clear that no matter how you slice it, in the end, you get a fire truck. Sure, it's powerful, but it's awkward, bulky, difficult to manage, and complex. And while versatile, it's well suited to very few actual tasks. Yeah, it does a bunch of things for you, but it adds just as many tasks. The "ORMs are Vietnam" essay comes to mind. Seemingly, web frameworks are also Vietnam -- appealing at first, promising at the onset, increasingly difficult to complete the task with, and far too complex to take to conclusion.

Don't get me wrong. I'm a fan of domain specific languages. Imagine HTML didn't exist. The Web would work like existing GUI toolkits: under software control, by long series of method calls setting values and building containers and adding things to containers, code would manually create the hierarchical structure of the display document. The code to construct the document would be twenty times larger than the document itself. It would suck.

I just have to imagine that DSLs would combine a lot more effectively. Web::Scraper combined HTML::TreeBuilder, some XPath module, and a few things in a stroke of brilliance such that you need to use none of the modules to leverage most of the effect of all of them. In Catalyst, you have to learn all of the modules to leverage the utility of any of them.

Through the introduction, the bearing of each thing on every other thing was clear. By the time we got into actually modifying the generated code, I was struggling to keep up with the web of relationships: what calls what, and what identifier is used by what to reference what in what scope. What matches? What depends on what? If I were writing this book, I'd have made a hairy-ass diagram to illustrate the interdependencies. Glossing over it, oversimplifying it, sweeping it under the rug... makes it seem easier than it is... until you try to do it yourself.

As it is, things have not changed considerably since Zope.

This reminds me of when I was trying to figure out the combination to a padlock that I had lost the combo for. I was watching a video because I couldn't find instructions in text. I didn't know if the whole video was instructions or if he gave a demo at the end, so I had no idea how involved the instructions were. You find where it sticks, write down those numbers, then find the ones that are all multiples of certain other numbers, divide, multiply, pick several from the list fitting something, do some combinations based on the first and second, and so on. I'd have been a lot more prepared for the process -- and a lot more careful -- if I'd known the actual level of complexity of the whole process at the onset rather than getting a continuous stream of one-more-thing.

All the Web framework has to do is take requests from somewhere, figure out what generates the content for each, call it, let it (none of your business at this point) call whatever it needs to generate itself, and give it an area to dump temp variables into so the parts of the page can communicate with each other. So, here's my challenge to you: next time you go to use a Web framework, write your own instead. I bet you can do it in 10-20 lines. 100 tops. At a certain point, you have to shift focus from the end users to the developers. I think someone should write the book, _Writing Web Frameworks_.
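To make the dare concrete, here's one way the 10-20 line version might look: a dispatch table mapping paths to handlers, plus a shared stash for the page parts to communicate through. The routes and handler bodies here are invented for illustration, not taken from any real framework:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The dare above, sketched: a dispatch table mapping paths to handlers,
# plus a shared scratch area (the "stash") that page parts communicate
# through. Routes and handlers are invented for illustration.
my %routes = (
    '/'     => sub { my ($stash) = @_; "hello, $stash->{user}" },
    '/time' => sub { scalar localtime },
);

sub handle_request {
    my ($path)  = @_;
    my $stash   = { user => 'guest' };   # temp-variable dumping ground
    my $handler = $routes{$path}
        or return "404 not found: $path";
    return $handler->($stash);
}

print handle_request('/'), "\n";
```

Wire `handle_request` up to CGI, mod_perl, or a raw socket as you please; that plumbing is the part the big frameworks spend their other thousands of lines on.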


by scrottie at March 07, 2008 02:38 AM

March 05, 2008

PhxBSD User's Group

PhxBUG Meeting - Beer & BSD

2008/03/04 - 7:00pm
2008/03/04 - 10:00pm
Casey Moore's Oyster House

This month will be Beer & BSD! Last month was our final meeting at the old location, and next meeting should be our first meeting at the new location. In the meantime the people at Casey Moore's have promised not to run out of our favorite beverages, so let's hold up our end and keep the BSD chat flowing.

by dwc at March 05, 2008 02:00 AM

February 07, 2008

Scott Walters on

Programming Perl, except s/P/6, s/e/5/, s/r/0/, s/l/2/.

Perl can do anything. That's kind of boring, in a certain way [Footnote 1].

Because Perl is so powerful, it's that much harder to impress people with any particular thing you do. There are multiple reasons to want to impress people; perhaps you're vain. Perhaps I'm vain. But I also just enjoy code as a highly formalized system of, basically, poetry [Footnote 2].

So maybe that's why I'm writing 6502 code for the Atari 2600 right now. Or maybe I just always wanted to. Or maybe I miss writing 6502 for the Atari 800/XL/XE computer, but I'm enjoying the better development tools (vi, dasm, cc65 on a fast host, generating tables from Perl). But that's not why I started this article.

I had a thought. It requires some background. It's not interesting outside the scope of Atari 2600 programming, but if I pulled it off, which I'm not even going to try to do, it would impress. The 2600 has no frame buffer. To display an image on the screen, the processor has to spend all of its time basically copying data to the screen by way of hardware registers. For the most part, the code looks like LDA #$xx, STA $reg, with some table lookups off of the X and Y registers too. If you have a RAM-based cartridge, like the Starpath, you can write self-modifying code that generates code to do nothing but LDA #$xx, STA $reg, over and over. That's as close to optimal as you can currently get. There isn't enough time each scan line to update all of the registers. You can position a sprite or two and update the sprite colors, but then you don't have the cycles to update the 40-bit-wide background, or vice versa.

But a cartridge could be even *better* than RAM-based. What if the code did nothing but LDA #$00, then STA to registers over and over, while another chip actually drove the data bus? The 6502 would be sending a low signal on all pins, but what if there were some Xilinx-style programmable logic on there too that knew how to just blit data from a buffer straight to the data bus, every third cycle (with the other cycles being STA instructions read from ROM and then the one-byte page zero address to write to)? You could *double* data throughput to the screen. Rather than screens being little programs, with lots of index registers and lookup tables, screens would be pure data -- what to write to the player and missile position registers, the background data registers (which must be updated *twice* a scan line if you don't want the right side of the screen to mirror or copy the left side), player color registers, background color registers, etc, etc.

The 2600 doesn't export many pins to the cartridge, so the ROM can't bus master. But with this scheme, it could be faked badly. Some register could be written to to tell it to wait to start asserting the data lines every time it sees a page zero STA. Then the register could be banged again to make it stop copying data. It would need its own little ROM or RAM or something. STA zero page takes three cycles. There are 76 cycles per scan line. One STA should be done to the CPU sync register, and to keep the size of this routine from being completely unrolled, a branch should be done, which leaves enough cycles for 22 register bangs -- unheard of.
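Since the dev setup above already generates tables from Perl, the cycle budget and the unrolled kernel for this scheme can be sketched the same way. This is a back-of-the-envelope generator assuming dasm-style syntax; the kicker-cart hardware is hypothetical, though WSYNC is just the standard TIA sync register:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Back-of-the-envelope for the scheme above: with LDA #$00, one STA to
# WSYNC (the TIA sync register), and a loop branch as overhead, how many
# 3-cycle zero-page STAs fit in a 76-cycle scan line? Then emit the
# (dasm-style) kernel.
my $cycles_per_line = 76;
my $sta_zp          = 3;           # STA zero page
my $overhead        = 2 + 3 + 3;   # LDA #$00 + STA WSYNC + branch taken

my $bangs = int(($cycles_per_line - $overhead) / $sta_zp);
print "; $bangs register bangs per scan line\n";
print "line_kernel:\n";
print "    lda #\$00\n";
print "    sta WSYNC\n";
print "    sta \$00\n" for 1 .. $bangs;
print "    bne line_kernel\n";
```

Plug in the numbers and you get the 22 bangs claimed above; the real kernel would also need the logic bank-switch pokes that start and stop the bus-driving chip.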

Oh, even better, if the code does one LDA #$00 then STA $00 over and over, then the PLA could assert both the data and the address, and then each line would be even more free form for things like updating player position to make players appear in multiple locations on the same line on some lines while changing colors on others, etc.

Footnote 1:

You do something hard or clever, and people say, thanks, you saved me the work (even if they're half serious, or if they really wanted it). Marc did Coro, which is mind blowing, and no one really... well, no one's mind was really blown.

Screw you all. You're spoiled rotten with all of this great CPAN stuff.

Footnote 2:

Sometimes people write Perl where the source code is literally poetry, but more often, it's beautiful for other reasons. Poetry makes clever use of words. Code makes clever use of interfaces, glue, algorithms, etc. Good poetry, like code, inspires awe and makes us feel like the world is a little less full of stupidity and coldness.

Someone wrote about how they considered code to be a specific kind of art -- a performance art. I don't remember their original argument, but I remember it jibing with me. It's more of a skill than a specific accomplishment; each work has very limited scope; it doesn't age well (except as an archaeological relic).


by scrottie at February 07, 2008 10:50 AM

January 31, 2008

Scott Walters on

Databases as a message passing back end

Beating this same horse some more. It isn't dead yet and I want it to die.

In the late 70's or so, MUD was created -- the multiplayer, text-based computer adventure game. Brits dialed in, paying by the minute (some getting addicted, running up huge bills, and going to jail) to play live with other players. I don't remember what technology it was built on, but it's still the gold standard today -- the richest world, most dynamic environment, most complex play, best grammar parser, etc, etc.

Its popularity gave rise to a number of free clones, written essentially as free software (MUD and MUD II were not and are not free). One of the first ran on a GE mainframe running GECOS. GECOS had no provision for IPC or for server processes, so each player got their own executable spawned. To communicate with other people's processes (and thus be multi-player), each process had to write out a file log of what that player was doing, and another program would read all of those files and write out files for each player with the results of their commands. It was IPC not through pipes but through files. It sucked. It was a terrible kludge.

Now we have Unix, and we have pretty solid IPC -- for programs running on one machine. If you make an app that runs on one machine, you can have great multi-user or multi-player capabilities. But that isn't "scalable". And in the late 1990s, we re-invented scalability. It now means that you have a database backend, and message passing is done over the database backend. Just like the MUD clone, the processes can't talk to each other except through this explicit read and write message passing system. It's so necessary that it actually feels good. Just like people adapting to Pascal's useless type system, successfully adhering to the discipline gives a rewarding feeling of accomplishment, and after a while, we forget that the discipline is ultimately pointless.

The irony is that real scalability -- not the kind that novices learning to program during the dot com boom invented -- was largely created as a way of supporting extremely large databases that wouldn't fit on a mere two or four processor computer, or even on all of the processors you could ram into one physical cabinet and still be able to move the cabinet. So we're using database technology to scale applications, but we're not using the _real_ scalability technology that enabled databases themselves to scale, so our "scaled" applications only scale as far as a database on a dual-processor, 8-core machine will scale.

Okay, a few places run Oracle or SQL Server for their web apps and do have database servers with hundreds of processors. That's another kicker. Microsoft got it right on this one, but we web app programmers can't.

Here's the upshot. A good database server running on a good operating system will run the same application spread across hundreds of processors -- or even thousands. This technology exists. It's called "single image". OpenSSI is one implementation. It's been hacked onto Linux. Go use it. Write apps that store transient user information in RAM in a very large distributed machine and put persistent data in the database. Cache in RAM. Don't re-adapt your software architecture yet again to use memcached or other sorts of caching. Don't cache data locally on the server and have each server redundantly caching the same garbage. Cache it once on one large computer built out of all the smaller computers. OpenSSI is memcached -- for Linux.
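On a single-system-image machine, "cache it once" really is just a hash in RAM: every process sees the same memory, so there are no memcached protocol round trips and no per-server duplicate copies. A minimal sketch of that idea, with the function names and data invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# On a single-system-image cluster, one in-process cache is visible
# everywhere, so "cache it once" is literally a hash in RAM -- no
# memcached round trips, no per-server duplicate copies. The lookup
# below is a made-up stand-in for a real database hit.
my %cache;

sub load_prefs_from_db {
    my ($uid) = @_;
    return { uid => $uid, theme => 'default' };   # pretend this is slow
}

sub get_user_prefs {
    my ($uid) = @_;
    return $cache{$uid} //= load_prefs_from_db($uid);   # RAM first
}

print get_user_prefs(42)->{theme}, "\n";
```

The second call for the same user returns the cached reference without touching the database; persistent writes would still go through the database as described above.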


by scrottie at January 31, 2008 12:05 AM

January 25, 2008

Scott Walters on

*My* Perl Wishlist (Hint: Not Your Wishlist)

Perl 6 (as specified) exists as the sum of other people's wishes. Some of them are quite awesome. But here's what *I* want in Perl.

* I want to see the Perl grammar written in Perl. The optimizer has already been rewritten in Perl, along with a means of plugging in additional optimizations (for fixing up the code to change its semantics, inspect it for correctness, etc). Having the grammar in Perl will allow it, too, to be extensible.

* I want to see the various ops in the pp_*.c source files implemented in Perl. Educated readers will see where I'm going with this.

* I want to see a B::CC counterpart that works very, very well. Ops that store state in their own op structures will have to be modified to stop frickin' doing that in the spirit of ROMable code. That's .. and ..., at least.

At this point, Perl 5 is self hosting. Perl is written in Perl, and Perl compiles Perl down to machine code. This is important for a bunch of reasons. It's easier to modify and extend the language. Working in C is a PITA. Computers can do a better job optimizing large programs than humans have time or energy to. The interpreter would be retained, so programs can be run immediately or compiled and then run as a binary.

* Good static analysis done on the parse tree (which is the same as the bytecode tree) so that optimizations can be proved safe for the compiler. For example, a scalar could be proven to never hold anything but numbers, eliminating a lot of calls to pOK. Variables could be proven, in many cases, never to be tied, or to have various other magic on them, eliminating large numbers of checks that are done on the variable before and during its uses in the present interpreter. Rather than inlining the contents of the pp_*.c definitions, a *subset* of the contents of the pp_*.c op definition could be inlined. There's no real win on compiling or JIT-ing until this kind of static analysis is done and it can be proven that large amounts of code can be omitted from the op definitions in specific cases.

* More and more Perl 6 is implemented on the magical, mythical self-hosting Perl 5. Perl 5 should slowly morph into a reasonable subset of Perl 6. XS back compat should be broken for cases where the macros can't be adapted to work, which is about any time magic or anything severe is used. Modules that depend heavily on the core do important things and would be moved to core -- autobox, Coro, Padwalker, etc.
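All of these wishes poke at the optree layer, and the core B module already lets you at least *inspect* that layer from Perl space. A minimal sketch -- none of the proposed machinery, just the existing introspection:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use B qw(svref_2object);

# The wishlist above is all about the optree layer; the core B module
# already exposes that layer to Perl space, read-only.
sub add2 { my ($x, $y) = @_; return $x + $y }

my $cv = svref_2object(\&add2);        # the CV behind the sub
print "root op: ", $cv->ROOT->name, "\n";

# Walk the ops in execution order until the chain runs out.
for (my $op = $cv->START; $$op; $op = $op->next) {
    print "  ", $op->name, "\n";
}
```

`perl -MO=Concise` dumps the same information from the command line; the wishlist's grammar-in-Perl and ops-in-Perl items would make this layer writable, not just readable.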


by scrottie at January 25, 2008 10:04 AM

January 16, 2008

Scott Walters on

What Perl is missing from AtariBASIC and LPC

I can't remember if I've talked about this already. If so, sorry.

In AtariBASIC, and probably other 8-bit BASICs, if the program encountered a fatal error (yes, there are fatal errors in BASIC, don't be daft), the program would stop, but it wouldn't abort. The code could be modified (by re-entering the line with its line number before it, thereby replacing the existing line) and then the program told to 'CONT'. And it would continue where it stopped. While it's stopped, you can do the usual REPL (read-eval-print, whatever) stuff and print out variables and otherwise inspect or alter program state. And you can hit the 'BREAK' key, something like Control-C, at any time to interrupt it. This is a fantastic way to learn and experiment with algorithms, as well as to debug algorithmic code.
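A poor man's version of that stop/fix/CONT loop can be faked in Perl today by running steps under eval() and dropping into a tiny prompt on a fatal error. This is a sketch of the idea, not real tooling; the "typed" input is fed from a string so the example stays non-interactive:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A poor man's BASIC-style stop/CONT: run steps under eval(); on a fatal
# error, drop into a tiny read-eval-print prompt, then continue where we
# stopped. Input comes from a string so the sketch stays non-interactive.
our $x = 0;

sub run_steps {
    my ($steps, $tty) = @_;
    for my $i (0 .. $#$steps) {
        eval { $steps->[$i]->(); 1 } and next;
        warn "stopped at step $i: $@";
        while (defined(my $line = <$tty>)) {
            last if $line =~ /^CONT/;       # resume, BASIC-style
            my @out = eval $line;           # poke at program state
            print "@out\n" unless $@;
        }
    }
}

my $session = "\$main::x = 42\nCONT\n";     # what we'd type at the prompt
open my $fake_tty, '<', \$session or die $!;
run_steps(
    [ sub { $x = 1 }, sub { die "boom\n" }, sub { print "x=$x\n" } ],
    $fake_tty,
);
```

What BASIC gave and this can't: replacing the *failed line itself* before continuing. That's the part Code::Splice-style op manipulation would have to supply.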

AtariBASIC was my first. 6502 assembly was my next. Then LPC. LPC was the interpreted C-like language used by LPMud (Multi-User Dungeon). A whole bunch of players would be running around, killing things, healing up, selling loot, trading stuff, stalking each other, etc, etc, while at the same time, other players who happened to be wizards were developing the game from the inside. They could call methods on objects to query or alter their state, and edit the code of the game. Quite often people would be standing around in a wizard's workroom gabbing while one or more were editing code, saving it, reloading the corresponding object, testing it, editing some more, etc. Editing in ed/ex (text mode, what came before vi) while people are gabbing, with code and prose mixed, was a challenge.

Both of these things are missing from Perl development.

I did a bunch of work on a zombie game ( is not stable despite my efforts and so the additional efforts there are sapping my strength). I can't just work a project through. I have to keep making it harder and harder for myself until I finally can't manage. I seem to be doing that with But I also did it with my zombie game, which I haven't touched in months, probably half of a year. I'm trying to bring the better parts of these elements to it.

I guess from the outset of the minizombies, I wanted to be able to teach people Perl the way I was taught LPC -- interactively, live, inside of a game. People looking at my code as I save it, talking to me, and having the code, upon command, reloaded into a live game for players to experience immediately. Code should be a social thing. It should also be an entertainment thing -- the wizards provide entertainment through their creations. It should foster creativity. People should be able to learn by example, modifying each other's code in simple ways at first. Programmers should take a first-person role in an object universe, themselves represented by an object, their player body, with methods and mutators and accessors. They should be able to call methods on themselves and their friends (and enemies).

As for tech, there's a JavaScript implementation of vi stolen off the 'net, a file browser, and a button to reload code from disc. There's an eval box for immediately, directly manipulating the universe through method calls or reporting on its state. The whole universe gets serialized with Storable, written out to disc, and reloaded on server start. And of course, there's chat built in. It's Web based -- did I mention? No, I forgot that part. That'll never be as quick as a straight telnet connection, but oh well. As for stopping, altering, and restarting code, I have Code::Splice and there's Cont. Gluing those two together should prove interesting...
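The reload-into-a-live-game part can be sketched in plain Perl -- method lookup happens at call time, so redefining a sub takes effect for every later call. The Wand package and its method here are invented for illustration:

```perl
#!/usr/bin/perl
# Sketch of LPC-style live reload in Perl: re-evaluate a package's
# source while the program keeps running. The Wand package is made up.
use strict;
use warnings;

{ package Wand; sub zap { return "fizzle" } }

print Wand::zap(), "\n";    # the old behavior

# A wizard edits the code and hits the reload button. Here the new
# source is a string, but do("Wand.pm") works the same for code on disk.
{
    no warnings 'redefine';
    eval 'package Wand; sub zap { return "lightning" } 1' or die $@;
}

print Wand::zap(), "\n";    # the new behavior, picked up immediately
```

State held outside the redefined subs (the Storable-serialized universe) survives the reload untouched, which is what makes editing the game from the inside workable.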


by scrottie at January 16, 2008 05:27 AM

January 10, 2008

Scott Walters on

Electronic Gaming (Gambling) perspective on E.Voting

Backstory: Years ago, Vegas gaming, or gambling, was run by the mob. They reported profits to the state, and the state taxed them, but not until they heavily skimmed off of the top. There was no transparency, only a set of nicely doctored books. Everyone knew this was happening but no one could prove it.

Eventually, the state got fed up. Gaming was big business, and they wanted their cut. Enough was enough.

There are a lot of places where "gaming" is legal, especially electronic gaming. Vegas is only one of them. Almost all of them are regulated, but none as heavily as Vegas. This has not put the industry of electronic game machine makers out of business. In fact, people continue to make games for markets outside of Vegas, but being certified for Vegas is a massive vote of confidence, and that helps reassure your customer, whether they're in Indian gaming, China, Central America, or offshore.

Now, let's talk about the regulations themselves.

Source code auditing:

E-Voting: Source code is not audited. Vegas: Source code is heavily audited, and the device manufacturer pays for lab time for the state technicians. If they're ever not satisfied with the security of some implementation, they have unlimited authority to make you jump through arbitrary hoops in the code to satisfy them.


E-Voting: Inexpensive locks that can be defeated by Bic pen cases are used, along with stickers. Vendors promise not to alter the code on the machines (code that was never audited in the first place). Vegas: The code approved at the labs gets hashed, and the size and hash of every file on the filesystem is recorded at the lab. Machines in the field are inspected to make sure that the sizes and hashes of the files have not changed. Any attempt to alter the code of a game on the casino floor carries a strong risk of detection.
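Mechanically, that manifest check is easy to sketch. Assuming SHA-1 and a size:digest record format of my own invention (the labs' actual procedures are far more involved):

```perl
# Sketch of a lab-style manifest: record the size and SHA-1 digest of
# every file at approval time, then verify the field unit against it.
# The digest choice and manifest layout are assumptions for illustration.
use strict;
use warnings;
use Digest::SHA;
use File::Find;

sub manifest {
    my ($dir) = @_;
    my %m;
    find(sub {
        return unless -f $_;
        $m{$File::Find::name} = join ':',
            -s $_, Digest::SHA->new(1)->addfile($_)->hexdigest;
    }, $dir);
    return \%m;
}

# Returns the files that changed, vanished, or appeared since approval.
sub audit {
    my ($approved, $current) = @_;
    my %all = (%$approved, %$current);
    return grep {
        ($approved->{$_} || '') ne ($current->{$_} || '')
    } sort keys %all;
}
```

Any edit to a file on the floor changes its digest (and usually its size), so audit() flags it against the copy of the manifest the lab keeps.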


E-Voting: Vendors can make excuses to worm their way out. Multiple screw-ups are tolerated with nothing more than a vague promise to keep it from happening again through unspecified means. Time constraints are a valid excuse. Vegas: A swift, vengeful kiss of death for any below-board behavior. Time, money, and other excuses are meaningless.


E-Voting: Software written by the manufacturer prints off a total. This is taken as gospel. Gaming: Every coin in and every payout is logged, along with every random number generated; periodically, the seeds are logged so that Gaming can reconstruct the state of the random number generator through every step of game play should anything seem dubious. Data is not only accounted for but examined statistically in spot checks.
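The reason seed logging works is that a deterministic generator replays identically from a recorded seed. A toy sketch, with Perl's built-in rand standing in for the machine's far stronger generator:

```perl
# Toy sketch of seed-logged auditing: re-seeding a deterministic PRNG
# with the logged value reproduces the exact sequence the game drew.
# Perl's rand stands in for the machine's real (far stronger) generator.
use strict;
use warnings;

my $logged_seed = 987_654_321;               # recorded during play

srand($logged_seed);
my @at_play = map { int rand 52 } 1 .. 10;   # cards dealt on the floor

srand($logged_seed);                         # auditor replays from the log
my @replayed = map { int rand 52 } 1 .. 10;

print "@at_play" eq "@replayed" ? "sequence verified\n" : "tampering?\n";
```

Given the seeds and the coin-in/payout logs, an auditor can step through every draw the machine made and check that the outcomes shown match the numbers generated.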

Regulations in general:

E-Voting: fluffy. Vegas: Every time a machine is compromised in any way, regulations are revised with multiple provisions in an earnest effort to keep it from happening again. Through years and years of this, the regulations have become voluminous. It's extremely important to note that Vegas gaming machines have been compromised many times, usually by inside jobs, but sometimes by well-organized outside parties. PRNG prediction attacks have been mounted, extremely sophisticated timing attacks successfully executed, tiny probes with cameras and serial ports stuck through vents, and so on and so forth. Without thorough auditing and transparency, many of these attacks would have gone undetected and improvements to security never made. The unwillingness of the Federal Election Commission to audit is telling.

Computer security:

E-Voting: Removable flash storage that can contain autoexec.bats or otherwise automatically executed code. No encryption. No digital signing. Re-used passwords. Public ftp sites on the Internet where data is uploaded. Vendor personnel are considered absolutely trusted. Vegas: Nothing in play is ever connected to the Internet at any point, for even a moment. Physical separation is maintained, meaning sneakernets of flash dongles and CD-Rs are used during development (I assure you, this is a bigger pain than it sounds like). At numerous points, cryptographic signing, public/private key encryption, and other mechanisms are used. All openings of the cabinet are logged, along with the floor manager responsible and other data. Control systems have separate logging systems, required by law to be in separate rooms. Should the logging system go offline, the control system must boot all users and end all game-related tasks within seconds. Both systems must be monitored by camera at all times, with video footage archived indefinitely. In general, the requirements are designed such that no single person would have any means available to them of tampering with the system. Multiple parties (the casino, the Gaming Commission, floor managers) are all able to independently validate the integrity of the systems.


E-Voting: Paper vouchers are "too expensive". Gaming: Most casinos make heavy use of "paper in paper out", where rather than dispensing bills, the machines print coupons that can be inserted into another machine or cashed out at the pit. In the course of an evening, large numbers of these tickets might be generated for each player.

I'm sure I'll think of more later, or feel free to ask me about any area of possible difference. I'll probably have something to say about it.

In short, though, it's disgusting to see a dozen-odd gaming companies get so many things so thoroughly right (and a few less than perfect, in my opinion, of course) while the two or three companies making electronic voting machines consistently get them wrong, and no one is held to account for it. The hubris is amazing. I don't think anything short of everyone taking up torches and pitchforks would motivate Congress and the e-voting industry to raise their standards. Like so many broken things in government, they seem entirely too eager to keep things broken. Anyway, nothing that people have proposed -- a cryptographic paper trail, auditing of votes, auditing of source code by trusted agencies, strong cryptography, physical tamper resistance, network isolation, logical tamper resistance, and so on and so forth -- can't be done and hasn't been done over and over. Someone with experience in the banking industry could probably make an equally scathing comparison. This is just disgusting. We've traded democracy away for some weak excuses from people who can't get security right but still somehow manage to claim to be experts.


by scrottie at January 10, 2008 09:48 PM

January 02, 2008

Scott Walters on

CF-R1 drive crashed; should I buy a P1120 or fix the CF-28?

Appealing to group think, something that sometimes works here. I love my Panasonic Toughbook CF-R1. It's old enough that APM works, zippy at 800MHz, and tiny at two pounds. Plus it's kind of tough: no fan to clog, and keys pop off and right back on for easy sesame seed removal. But parts are a bitch, as are repairs. Replacement LCD panels cost $500, if you can find them, and I spent a week calling and requesting quotes to find that out. It takes a funky 3.3-volt-only HD (with no 5-volt connection required), but reportedly a lot of drives can be shoehorned in. This is the only machine that's survived being toted everywhere for more than a year. I have some that look like hell, with parts falling off all over and keys worn through. After years and years of use, its little 20-gig drive died. Yeah, I have most of the data backed up, mostly recently. Never as good as it could be. Anyway...

I also have a CF-28 fully rugged Toughbook (like you often see in police cruisers) that needs a new backlight. It's easy to upgrade as parts are accessible. It also has two PCMCIA card slots (plus an extra hidden one) plus two mini-PCI slots. Battery life is pretty rotten even with the big battery. It's heavy. You could fit a Trojan army in its expansion bay and I'm not used to even having a CD-ROM drive so I don't see it as a real selling point. Of course, when I got this machine, I really wanted it, and my CF-R1 was laid up with a broken screen and I wanted something that wouldn't have that problem, but I missed my tiny CF-R1 and got it fixed anyway and abandoned the CF-28.

Or I could get a Fujitsu P1120 on eBay pretty inexpensively. Someone has them for $250. I should pick one up just to have it around. It's also two pounds, but at 2.5 pounds with the extended-life battery, it'll run for a solid six hours, which is better than twice as good as the CF-R1. It's also 800MHz, but a slower 800MHz with the Transmeta. The screen is smaller and less likely to break, even if the CF-R1 does have paper-thin magnesium on the back of its screen. It takes normal drives. It has WiFi built in, freeing up the PCMCIA slot. Taking normal drives, I could easily adapt it to take solid state drives with an IDE-CF adapter. Someone even has a dual CF adapter that makes one the master and the other the slave (sorry, don't have the link, HD crashed). I could run software RAID on solid state drives -- w00t! And though it weighs as much as the CF-R1, it's physically smaller, making it more pocketable. I've been carrying an external battery for the CF-R1 to get more run-time. I'd carry another internal battery, except that software suspend would lose my state.

So I could fix up the CF-28 Toughbook, fix up the CF-R1 (perhaps even with a larger drive), or I could buy the P1120. But I'm very low on funds.

Gah. I also need to put the money into a machine that I'm going to be happy with as my development environment, as it takes time I don't have to move everything, and I have quite a lot of software that isn't in any apt or rpm repository.


by scrottie at January 02, 2008 03:32 AM

December 26, 2007

Scott Walters on

All about blessed stash objects: better living through evil

Typically, when you write an object in Perl, you combine two disjoint things: data, as stored in a hash, and methods (code), stored in a package. bless makes this association. Any reference type can be blessed, but these references may only be blessed into a package. Here's the trick: packages are data types in Perl, one of the 15-odd types, and you can take a reference to them: \%{"foo::"}. This means that packages can be blessed into packages. This trick involves resurrecting some of the oldest data structures in Perl, globs and stashes, for modern, OO purposes. A whole lot of fun ensues.

Okay, I'm going to stop calling packages packages; they're created with the package statement, but they're better known as a stash, or symbol table hash.

Like hashes, stashes contain data of various types, indexed by name. When you don't use my to declare your variables, variables are also stored in the package, as was the way in the olden days. Functions declared like sub foo { ... }, the common way, also get stored in the stash. Normal function calls and normal method calls (like foo() and $ob->foo, and unlike $hash{value}->()) all operate on stashes.

Newly constructed blessed stash objects are empty of methods. Code references get copied in, initializing them as a copy of another object's code. This is a "prototype based object system", as is JavaScript's. JavaScript objects are hashes, with the key being the method name and the value being code. Since each object has (indeed, is) its very own stash, we can define our methods in terms of closures.

Here's some code to create one of these puppies, and to create a method function that will neatly stick closures into the stash for you. This is old code I've posted before; sorry for the dup. I'm trying to turn this into a more accessible article.

sub new {

        # object setup (evil, run)
        my $type = shift;
        my %opts = @_;
        my $package = $type . sprintf '::X%09d', our $counter++;
        do { no strict 'refs'; push @{$package.'::ISA'}, $type; };
        my $self = do { no strict 'refs'; bless \%{$package.'::'}, $package; };
        sub method ($&);
        do { no strict 'refs'; no warnings 'redefine'; *method = sub ($&) { my $name = shift; *{"$package\::$name"} = shift; }; };



Then inside there, you can write methods like so:

    my $arg = $opts{foo};

    method foo => sub {
        my $self = shift;
        $arg = shift if @_;
        $arg;
    };
$arg is a lexical variable that the method foo closes over. Each time new gets called, a new stash is created, and a new $arg gets created, and a new coderef attached to that new $arg gets created and rammed into that new stash. New everything, each go -- that's the trick.

When you write "package" in your code, you're defining a new stash. They also autovivify (spontaneously spring into existence by their mere mention). That looks like %{"foo::"}. Yes, that's similar to a computed hash name (and it also requires no strict 'refs'), but the name ends in a double colon.
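A tiny sketch of poking at a stash from the outside (the Foo package and its contents are invented for illustration):

```perl
# Peek at a stash directly: a package's symbol table is reachable as a
# hash whose name ends in '::'. The Foo package here is made up.
use strict;
use warnings;

{ package Foo; our $bar = 42; sub baz { 'hi' } }

my $stash = do { no strict 'refs'; \%{'Foo::'} };   # note the trailing ::

print "Foo has a 'bar' entry\n" if exists $stash->{bar};
print "Foo has a 'baz' entry\n" if exists $stash->{baz};
```

Each entry is a glob holding every variable and sub of that name, which is what the method-stuffing below relies on.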

my $package = $type . sprintf '::X%09d', our $counter++; -- this computes a new package name based on the existing one plus a serial number.

do { no strict 'refs'; push @{$package.'::ISA'}, $type; }; -- this forces the new package to inherit from the base one, so that it in turn inherits what it inherits.

my $self = bless \%{$package.'::'}, $package; -- this creates the object as this new stash blessed into itself.

sub method ($&); do { no warnings 'redefine'; *method = sub ($&) { my $name = shift; *{"$package\::$name"} = shift; }; }; -- prototype the method function to take as args a scalar and code, and define it as stuffing that code into the stash under the given name. That's the glob syntax. Stashes contain globs. Only references may be assigned into globs, and by assigning in a code reference, a new method is created.

Stashes are a derivative of hashes, but rather than containing arbitrary types, they only contain typeglobs, which may in turn contain any other type. This way, you can have both a $foo and a @foo, as a stash slot in turn can hold one of every other type.

Instance data is then hidden away in lexical variables where subclasses can't see it. That's not always what's desired. In a blessed hash, you could write $self->{foo} to get at a data item. Since stashes only contain globs, you'd have to instead write ${ $self->{foo} }. To access an array stored in a normal blessed hash, you'd write @{ $self->{foo} }, which is the same for blessed stashes. Everything is stored by reference, including scalars, in blessed stashes. Data::Alias can make this a lot easier:

    use Data::Alias;
    method foo => sub {
        my $self = shift;
        alias my @foo = @{ $self->{foo} };
        push @foo, @_;
    };

alias gives you fully read-write variables that are aliases to data stored in the object.

Data stored in $self is actually stored the same way that local data is stored. Only the syntax is different.

Before you get started, a few caveats: methods built out of closures (which access instance data simply as $foo rather than $self->{foo}) take a lot more memory than normal methods. Stashes don't get garbage collected at all; data stored in them is considered "global", and this data includes references to the closures and references to lexical variables. It probably contains circular references. You may want to write a DESTROY routine to tear everything down.
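A hedged sketch of such a teardown, using Symbol::delete_package from the core Symbol module. The demolish name is my own, and whether to wire it into DESTROY (carefully, since the object is its own package) is left open:

```perl
# Sketch: explicitly tear down a blessed-stash object by deleting its
# package, releasing the globs, closures, and pinned lexicals.
# The demolish name is invented; pick whatever fits your class.
use strict;
use warnings;
use Symbol qw(delete_package);

sub demolish {
    my $self = shift;
    my $package = ref $self;      # the object *is* this package
    delete_package($package);     # drop every glob in the stash
}
```

After demolish, the per-object package and everything it pinned down are gone, so the closures and their lexicals can finally be collected.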


by scrottie at December 26, 2007 01:34 AM

December 20, 2007

Scott Walters on

How I mismanaged Phoenix.PM into the ground

I've been writing for a while about the state of Phoenix.PM. This is the latest chapter. I wanted to give a talk on new stuff in Perl 5.10 as well as just have a general meeting. Some back story is near the end.

At one point, we had a lot of people just getting into Perl, people who were doing fun stuff with Perl on their own; people interested in what was going on in the community and in any techie Perl stuff even if they couldn't use it themselves; people who saw whole Perl careers ahead of themselves.

Then we stopped getting new Perl programmers, except for a light trickle. Then we only had people who wanted techniques they could apply to their daily work -- mostly people in over their heads trying to do some work in Perl, or else accomplished Perl programmers doing uninteresting maintenance work who don't program for fun. Now, thanks to my bumbling mismanagement, all that's left of the group is three guys who don't even actively use Perl and never really did that much anyway, plus one who does actually work in it.

Granted, I didn't give much notice and it's the holiday season, but I actually caved and offered to give a presentation.

The current state isn't all my fault; a lot of Perl programmers were lost to C++ and whatnot when things got tight and the older, more stodgy industries were all that were left. When Python broke, a lot of Perl programmers decided that it suited their personalities better, and they really wanted a break from all of the not-so-clean code they had been working on. A chance to write new code, from scratch, in a language whose culture valued cleanliness more than Perl's was a welcome change for them. Then PHP broke, and the snotty little brats who didn't think they should ever have to learn anything, just call routines from a library (never mind algorithms, complexity management, security, or any of that), all pissed off and stopped coming to Phoenix.PM to look all glassy-eyed at the complex (OMG!) things people were doing in Perl. Then Ruby struck, and its culture was filled with a hackish spirit that valued neat APIs, cool tricks, minimalistic design, and above all else, fun -- something that we who had been doing predominately maintenance work in Perl for all too long could definitely appreciate.

So all we had left was Perl programmers who, for whatever reason, didn't defect from maintenance work in Perl, and some stragglers who attend meetings for reasons entirely unrelated to the content of the presentations or the Perl theme -- two of the three just attend a lot of technical meetings for technologies they don't actually use or use very, very little.

I tried a few things -- I wanted to virtualize the meetings, so that Perl programmers who were too busy to come to meetings or even post on the list could more easily keep in touch with each other and with what other Perl programmers in Phoenix were doing, thinking that people would blog about stuff at work, their own take on technology, what they were doing, or even how their family was. Nope. Of the hundreds of people on the list, two had blogs, and one of those I dug up by Googling. He asked me to remove the reference to it, as it was entirely off topic to programming. I ruffled a lot of feathers trying to get this idea to fly and in the end only alienated people more.

So I decided to have laser tag meetings, or try to get people out to ski, or just do something social. Again, no show.

In retrospect, the only meetings that would have gotten attendees would have been variations on "how to do small scale projects in Perl without learning very much at all" or "ways to more rapidly diagnose and troubleshoot problems in large hairy Perl programs and avoid having to do so in the future". In fact, I did one or two of those on subconscious instinct, and sure enough, that's what got people out.

Back Story:

Brock (Awwaiid) left town and left it to me. Being one of very few presenters, and presenting frequently, I'd been a sort of second in command who assumed the pumpkin by default. Towards the end, I was sick to death of only Brock and me giving the presentations. I like making little presentations, but it had turned into the Brock and Scott hour. So when Brock left, I started trying these various things to reinvent the group as something other than me sitting there talking to a small group of people, most of whom wouldn't be back because they didn't get anything they saw as immediately applicable to their work from the meeting.

No offense to the people that said they could make it; it's good that you wanted to come hang out with me and each other. But, from my point of view, not even the maintenance programmers

By the way, I'm not trying to convince anyone that Perl is dead; Phoenix is a special case. People here are apathetic and distant in general; anyone passionate about programming would be sorely tempted to move to Silicon Valley, Boston, or anywhere else, and those here who do have personal interest all seem to be interested in moving somewhere else. Unless all you need to live is McDonald's, Home Depot, and so on, Phoenix is not for you.


by scrottie at December 20, 2007 01:24 AM

December 06, 2007

Scott Walters on

Mini-helicopter, bleah

There are a lot of these. This is the same one that ThinkGeek seems to be selling. It has only one rotor, but has a stabilizer that's slightly offset from the rotor and connected to it. For the past few days, I've been trying to learn how to fly this thing. Here's what I've learned.

Trim is essential. But don't bother with the trim. As the battery drains, which happens extremely rapidly, the trim completely changes. It'll start out trying hard to spin to the right and wind up spinning to the left. When it has juice, the main rotor has an abundance of power, but as it starts to get low, the tail rotor overpowers it. Keeping a constant balance between the main rotor and tail rotor is essential to avoiding a spin. Control changes must be made slowly and gently, and control maintained at all times, or else, if it's moving, it'll start sliding back and forth or side to side on a cushion of air, like an unstable parachutist. For about twenty seconds, it has juice and can lift off with ease (or very delicately, if you want to avoid a spin), but over that entire twenty seconds, the controls and basic situation change radically as the main rotor loses power and you move from pushing all the way right on the tail rotor to all the way left. After twenty seconds, it barely slides along the ground, not enough power left for the rotor to right it no matter how hard you push on the stick, tripping over any slight bump in the floor (forget having any rugs or the like on the ground), falling over, and gnashing its blades. So, it has enough juice for a two minute flight, as advertised, if you consider thrashing around on the ground a flight.

Your heart might be glowing as you think, "wow, that must be just like a real helicopter! I want one! I want to learn how to fly a real helicopter!". No. Real helicopters do also require slow, gradual control changes, slightly unbalancing the controls to effect an attitude change, then correcting them. But they also have the power to do so. This thing does not. Real helicopters are capable of hovering. This thing is not. As for pitch adjustments of the main rotors to maneuver in any direction or stay in place, I don't expect that from a $30 toy, but a small amount of logic to either emit pulses to drive AC motors or else regulate power to the DC motors, so that the torque of the main blade and the tail rotor stays balanced at all times, would go far. Also, it needs more battery power, but regulated battery power. It burns like a roman candle: crazy for a brief, glorious few moments, then it fizzles out. The toy's problems make it want to rotate one way or the other, oblivious to control, so badly that for a while I thought the thing was broken and the control didn't even do anything. Its own problems far outweigh the real problems of flying a helicopter. It's annoying the crap out of me. I *really* *really* want to learn to fly it. Grr! Guess I need to get a better one... oh, and it's insanely cute.


by scrottie at December 06, 2007 10:05 PM

December 05, 2007

Scott Walters on

Sometimes I'm fine, sometimes I'm not

Strange hotels. B&Bs. Friends' apartments. It's remarkable how often I'm okay. And it's remarkable how often I'm not, even when I should be...

The poor neighborhoods aren't necessarily problematic and the rich neighborhoods aren't necessarily safe. Old places aren't necessarily bad either and new ones aren't necessarily good.

So many things can go wrong, and it takes so little. An end table from a thrift store that sat somewhere that was sprayed heavily, until the spray absorbed into the wood, and now it slowly off-gases. Neighbors across the street and a house over who come out with a can of Raid every day to try to chase ants away from the barbecue grease (futile; as long as there is food, there will be pests). Exterminators 30 years ago pumping the walls full of something that's extremely environmentally persistent and now banned (dozens of organophosphates have been banned, starting with DDT, yet they still keep making more, right up to the present day; Raid and Black Flag are primarily organophosphates with some others added).

So, a lot of thought goes into figuring out "where it's coming from". I do that insane-looking (and possibly insane-being) peering-out-the-window thing a lot. If I hear a noise, I wonder if it's the exterminators and I have five minutes to run or else find myself in extreme pain. Or I wonder if the neighbors are out back spraying, and that's why I've been having problems lately. I'm not the only MCS sufferer to struggle with trying to figure out whether I'd be better off with the windows open or closed. Airing a place out is often necessary (though as the off-gassing continues, generally of temporary use), but timing it right is essential, or I'll wind up with a much stronger dose.

Then there's another tiring dilemma... when to call it quits on a place. Can I make it through the lunch I've ordered at a restaurant without making a fuss? That one truck stop that always pegs me hard, do I just refuse to go in? Should I even bother to try new coffee shops after having tried thirty and having three on my safe list? How much can I put up with in an apartment considering that breaking leases, awkward encounters, hunting, and the heartbreak of not having something work out also takes a large toll on me?


by scrottie at December 05, 2007 07:55 AM

November 10, 2007

Scott Walters on

Moby Dick *does* *not* suck. But old media papers do.

Fountain Hills is a small burb outside of Phoenix. Scottsdale grew into it somewhat recently, actually connecting it to the metro sprawl. Fountain Hills Community Theater is our community theater. They're currently doing a one-man version of Moby Dick. Some dumb bimbo wrote a review that seemed to entirely miss the point, that of the show being a one-man Moby Dick: .html

I posted a reply and duplicated it below the cut here. Let this be a lesson to dumb bimbos everywhere who think that you don't need any grasp of logic to write a theater review.


The central premise seems to be that Fountain Hills Community Theater's rendition of Moby Dick stinks. The first argument to support this is, briefly, that the book is a taxing read, and that the only situation in which it would be read is under a mixture of requirement and duress. This line of argument would suggest that any faithful performance of Moby Dick would be tedious, so it's confusing why FHCT's performance is singled out.

But I'm not just picking nits here. The basic questions I have while reading a review seem to be dodged. Another central argument against the FHCT performance is that it lacks actor-to-actor dynamics, set changes, and so on. You're reviewing a one-man play, and this performance is billed as such. There are well-respected one-person plays that get good reviews, and those good reviews sometimes carry the caveat of the dialogue being hammy at times, so what sets this one apart? It seems unfair to give poor marks to a one-man play simply by virtue of it being a one-man play. If I were a reviewer asked to review a one-man play and I hated all one-man plays, I'd at least disclose my bias. So the question remains unanswered: for those of us who like a well done one-man play, is this a good one?

Let me put it this way. I could look at the billboard outside of the theater, see that they're doing a one-man Moby Dick, and write this same review: it sucks because it's one man. It sucks because it's Moby Dick. The names of the director and actor are published and could be referenced easily enough. If I were a little clever, I could guess that a few people couldn't stomach it for one reason or another and left at intermission. And that an old lady fell asleep at some point is a safe bet for almost any play. So, there's nothing in this review that gives me cause to think that you've actually seen the play. It seems like an amateur hack job.

For the benefit of other people reading this review and associated commentary, I have seen this play, and my friends and I are passing this link around with some consternation. Here's an alternate review: the play is the story told from the perspective of Ishmael. There's one person on stage because an aged Ishmael is telling you the story as if during an encounter in a pub, but he, slowly at first, begins characterizing his shipmates. This is not a play where a smaller number of actors than characters attempt rapid changes to create an illusion of there being more actors, such as in Shakespeare Abridged. It's simply one actor characterizing others in a story that he's playing the lead role in, and in my opinion, doing a fine job of it. The language is dense, the lighting dark, the ocean sounds hypnotic, and the tension slowly building. Regardless, the story is neither light nor cheerful. Disney-ing it up with a fake whale and fanciful costumes would only detract from the story.


by scrottie at November 10, 2007 05:11 AM

October 29, 2007

Scott Walters on

Unix-admin-fu Q's for the shared host/free shells business

Let's say I go the colo route and put a machine somewhere that costs money. After this background, there are technical questions.

I don't want to be in a position where I have a bad month and can't pay my hosting bill, as would often happen, and AdWords pays paltry amounts of dough. Tens of thousands of impressions of were coming in at around a dollar -- nowhere near worth the annoyance to readers of the site. If I set up a free shells/shared host deal, there's the potential of donations, which couldn't be any sadder than AdWords. has an on-line sign-up. You can mail them postal mail and get an account for the cost of the return stamp, but most people choose to PayPal a dollar on their online signup instead to get the account instantly. And of course, the little form suggests you donate and give them $5, $10, or $20 instead. I gave them $10, just on a lark, but probably wouldn't have if I'd known they didn't permit CGI. They don't allow outside executables at all, either.

So, my question to the community is, how could one go about securing a machine in a similar, but more permissive arrangement?

Firewall rules could keep outgoing connections from contacting mail servers, but are there just too many things that would have to be blocked? Would I have to block all outgoing traffic, like free shells often do? Most free shells prohibit IRC relays and IRC access except to small, designated IRC networks. Assuming I've collected a dollar from PayPal and therefore presumably could cooperate with the police on matters of abuse, would it be imprudent to allow these users access to IRC in general? I'm not versed in quotas -- how can CPU quotas be enforced? Duke University (no, I wasn't a student) ran some AFS+Athena stuff; I'm fuzzy on this, but they were able to cap CPU usage per user in a far more useful way than ulimit's -t argument. Is there a way to cap what percent of the CPU a process uses rather than just how many CPU seconds it takes up? What else would be needed to be able to give people access to CGIs? What else needs to be done to set up shared hosting in general, other than setting people's default umasks intelligently and running CGIs as the user? Would it be out of the question to allow people to run arbitrary binaries (i.e., run without exec cookies)? And allow people access to gcc? What exactly would go wrong? It seems like if you're going to give people access to Perl, you might as well give them access to gcc. How are run-away processes normally handled if not through accounting -- just something that parses the output of top and kills as needed? What subtle aspects of shared hosting am I missing here?
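On the CPU-quota question, here's a sketch of what ulimit's -t actually buys you: it caps cumulative CPU *seconds* per process, not percent-of-CPU share, so it catches runaway loops but does nothing about a process that merely hogs the CPU slowly forever. (The script is illustrative; nothing in it is specific to any particular free-shell setup.)

```shell
#!/bin/sh
# ulimit -t caps total CPU seconds for a process. The subshell below
# burns CPU in a busy loop until the 1-second quota triggers SIGXCPU,
# whose default action terminates the process.
( ulimit -t 1; while :; do :; done ) 2>/dev/null
status=$?
# A process killed by a signal exits with status 128 + the signal number.
echo "runaway loop killed, exit status $status"
```

Capping a percentage rather than a total is a different mechanism entirely -- per-user scheduler shares or similar accounting -- which is presumably what the Duke setup was doing.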


by scrottie at October 29, 2007 09:25 PM

Compute clouds are doomed, shared is dead, what's next?

kjones4 commented: "I have no answers, but I've been kicking around thoughts on hosting too. Have you considered Amazon's S3 + compute service? It looks to be reasonably cost effective and reliable, but I have no experience using it. The ISP business has changed a lot in the last 15 years. It used to be that I wouldn't hesitate to buy a couple of servers, deploy them in a data center, and manage everything myself. Now, it seems to make more sense to be a virtual ISP, where you buy services from people who know better about specific services such as backup, email, fail-over, etc. The problem is knowing if these 3rd party services are in fact more reliable and flexible enough to do the job. My brain hurts with the thought of researching all of this."

Hmm. First, nope, haven't seriously considered Amazon, or LoudCloud, or IBM's "compute as a service" thing, or Sun's Grid. I guess I mentally lumped Amazon's things in with the other "we have a giant computer, and we'll sell you some of it, and oh by the way, our computer is really big" services.

That's what used to be imagined as the future, as far back as the late 1960s, when MULTICS was being written for just that purpose.

Oddly, I think hardware is being outsourced more (outsourced data centers, dedicated hosts, Xen slices/semi-dedicated hosts, etc). But the *software* is being outsourced less. People hate shared hosting. And those that don't should. The config is bizarre (SSL and plain http on different machines -- I've actually seen that), the systems aren't maintained, there are too many restrictions (can't daemonize apps, have to use a klunky GUI admin to do sysadmin tasks), etc. Shared hosting is ghetto hosting. The jobs of the sysadmin used to be keeping things secure, audited, updated, logs rotated, etc. All of those tasks have either been given up on or automated. Updates are cron job calls to apt-get. Log rotation is from cron, and people stopped reading logs a long time ago. I doubt criminals even bother to cover their tracks any more when they penetrate a system, or if they do, it's just to hide themselves from automated break-in audit tools. Since the sysadmin is now redundant, replaced mostly by facilities of the operating system, people want to pick their OS. They want the freedom to pick FreeBSD, Slackware, Ubuntu, CentOS, or whatever for their host. Things like slicehost, where you get your own Xen slice, are proliferating rapidly. There are hundreds of them now.
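To make the "sysadmin as cron job" point concrete, the whole traditional job description fits in a couple of crontab lines these days (this fragment is hypothetical -- paths and schedules invented for illustration):

```shell
# Hypothetical /etc/crontab fragment: the automated sysadmin.
# Nightly unattended updates -- what used to be a person reading advisories.
0 4 * * *   root   apt-get -qq update && apt-get -qq -y upgrade
# Weekly log rotation -- for logs nobody reads anymore anyway.
0 3 * * 0   root   /usr/sbin/logrotate /etc/logrotate.conf
```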

I don't mean to pretend I wasn't or am not seriously considering shared hosting, but the question of Amazon's and these other "compute cloud" services that came out recently is kind of interesting.

What is their market? A bunch of little guys? They don't do much in the way of support, and the setup is pretty technical.

Big guys? They hate not controlling their own datacenter, or at least racks in a datacenter. Even companies that have a whole bunch of machines at a colo will have their own little datacenter in the office, too. And it's not just for generic compute cycles: there are Windows/x86 machines for domain controllers and, all too often, ExchangeServer (ewwwch). Novell servers -- NetWare, I mean. And so on. All of this could be moved to generic compute clusters, but it would require switching products, doing countless "upgrades", and homogenizing things. Hell, IBM must have an ulterior motive in trying to sell their compute cloud: just getting you locked in to AIX by making you switch to AIX software and replacing your database, CRM, etc, etc. Their TV ads are pretty adamant that everything will run there. And every few days, they issue a press release telling you how much power you can save by replacing all of your computers with a big IBM one that's faster than all of them put together. Similar for Sun. As far as I'm concerned, any restriction which keeps the idiot suits at the top from buying and running whatever stupid piece of software they want is a doomed proposition.


by scrottie at October 29, 2007 04:58 PM

October 25, 2007

Scott Walters on

Thinking outloud about consolidating servers and data

Sorry, this is becoming a frequent topic post by me.

I need to get off the home DSL line I'm on. That's part of another story.

I hate x86 hardware. It's nothing but fan failures, thermal failures, and blown caps. But for lack of options, I have to soften up on some hatreds. Let's just say that this site has been through a dozen machines. All of the RISC hardware still runs, and none of the various x86 machines, except the original 486, does.

Colo in town starts at $40/month. That's a hell of a deal.

My old RISC hardware is old and slow and takes SCSI drives, which are expensive. My largest drive is a 70 gigger, and it's a half-height drive (all of the drives you see today are third height or quarter height or God knows what, but this drive is a tank). I only have one, and it's in production. I'd rather have two.

Having only the one that's in production, I don't want to move the 32 bit OpenBSD install off, put 64 bit OpenBSD on it and put in the UltraSparc. Too much down time. On the other hand, if I got a root drive set up on a smaller drive and just shoved it in as the /home partition, it could be quick and painless. Hmm.

NetBSD completely hosed the 32 bit Sparc platform. Server processes reliably and quickly wedge in an uninterruptible kernel state, requiring a reboot to free up the port they're listening on. Likewise, OpenBSD on 32 bit Sparc gets very little attention and as such has serious stability problems. So a move to UltraSparc (64 bit) might improve stability. Or it might not.

I also found some SCSI-IDE adapters (active logic) but since I'd be putting the concoction in the CD-ROM drive bay, I could only fit one. The SCA drive bays only fit exactly one SCA drive.

Regardless, after a lot of time and effort, I don't have a stable machine to colocate.

I have users who are already weary of moves. I'd like to keep running OpenBSD to minimize impact to them.

I have a desktop system with a large-ish harddrive in it, a 320 gigger. I'd like to go to just having the laptop and a server, lose the desktop, and combine storage. Most of the stuff on it I'd rather have online anyway -- mp3s to share with friends, video, etc.

There are these virtual server things now, based on Xen. OpenBSD doesn't yet support Xen. Work has been done, but I don't know how stable it is yet. They tend to be conservative.

Virtual servers are more affordable than colocating real servers if you don't care about storage. If you care about storage, it suddenly becomes prohibitive.

Possible solutions:

Move everyone to Slackware Linux on a shared host and just have a lot of content not be online.

Buy some cheap x86 hardware and colocate it, with some cheap, large IDE/SATA drives.

Cross my fingers and hope OpenBSD/sparc64 is a hell of a lot more stable than OpenBSD/sparc and colocate the Ultra 1 with the one big SCSI drive in it, and also hope that the drive doesn't melt down.

Give up on having users and move onto a shared host deal.

I'm singularly ineffectual at thinking, as past performance so painfully illustrates, and am badly in need of thoughts from external sources. Thoughts?


by scrottie at October 25, 2007 10:25 AM

October 23, 2007

Scott Walters on

ActiveState sucks (a brief survey of Perl compilers)

Yes, I'm aware that writing a series of articles titled in the format of "$x sucks" makes me a troll. Maybe I'll later write that "trolls suck" and lament that even when trying very hard to approach delicate topics delicately, some people, such as myself, generally fail miserably and are confronted with a choice of not approaching delicate topics at all (a popular choice) or living life as a troll. But that's neither here nor there. Perhaps a better title would be "any attempt to compile Perl results in suckage". Anyway...

So, I've got this project. It involves physical gaming machines installed at real casinos in Vegas -- pretty damn neat. In order to accomplish this, all systems, including all code, must be approved by the Nevada Gaming Commission. Many, many, many things are required for that to happen, but one of them is that all code must be compiled.

"That's dumb!", a thousand voices cry in unison. It's security through obscurity. The effect it has on C code, that of badly mangling it, it doesn't have on Perl. With minimal effort, a good faximilie of the original can be reproduced. But regardless, we need this.

Perl's built-in B::Bytecode (perl -MO=Bytecode) unearthed a string of coredumps, some from the compiler, some from the compilee. Anyway, I've come around to needing to explore this more. Likewise, par and pp utterly fall down, but Bytecode filtering is marked as deprecated, so I'm not filing any bug reports there.

People involved in the project before me settled on IndigoStar's compiler. It threatens to work pretty well, but then exhibits brain damage where least expected. Regexes inexplicably fail, and comparisons come out wrong. $^O, printed out, clearly says "linux", but $^O eq 'vms' on my Linux laptop comes back true, taking the VMS case on an if statement. But trying to use a module that uses another module utterly fails, no matter how many #perl2exe_include lines you put in. Argh!

That brings me to ActiveState Perl. There are a lot of fly-by-night sites that have good graphic design but were clearly done by complete kiddies. I know ActiveState employs well known and well respected Perl personalities, but they send off this vibe, big time. First stop, they make you make an account. Ghetto, lame, but whatever. Download the product. Next step, documentation. Hmm. None of the four or five nav bars say anything about documentation, but there's a search bar. Search for "documentation":

Zero results.

Okay, the documentation is probably in the tarball. In fact, there's a Welcome.txt, telling me exactly where to find the documentation:

    Please see the PDK User Guide at %%HTMLDIR%%/index.html

cd '%%HTMLDIR%%'. No such file or directory. Alright, probably just a string of glitches. Taking an educated guess and doing some find'ing, I cd into pdk/share/doc/HTML/en/pdk where there's an index.html. I fire up w3m on it. There is a blank page, save for two gif logos. My heart skips a beat. Then I realize that w3m doesn't do JavaScript or Flash, and it's probably an intro page. Firefox. Nope. index.html really is blank.

Version seven of this "perl development kit" thing. Maybe version eight will offer documentation. Originally I hadn't thought to air this bit of dirty laundry and go be a troll; I was going to tease them a bit for their profound bit of silliness, but I failed at that too. I clicked on contact, and even though I had made an account and "logged in" so I could download the thing, it informed me that I hadn't verified my account and offered to resend the verify email, which I did. Ten minutes later, still no email from them shows up. I bet they don't get a lot of feedback...

So, after this long, tedious, painful journey of Perl compilers not working right, do I really want to try one that has a whole site that doesn't work right and no documentation to be found?


Next question. It's almost impossible to write secure C, and it's very tedious to do anything in it. I hate Python but it compiles well. I don't know about compilation in Ruby. Java is extremely tedious but an option. Dear reader, if I have to rewrite this project just to make it compile, what language should I use?


by scrottie at October 23, 2007 09:53 PM

October 12, 2007

Scott Walters on

Virtual living

Darrin, from the local BSD User's Group, who sometimes comes to Phoenix.PM meetings, once lamented, during a discussion of online services, about the legions of the "virtual homeless"... people who spread their existence between livejournal (or myspace, or whatever), flickr, youtube, twitter, and so on and so forth. Asked whether he blogs, he said, sure, on his own machine, spoken with his usual emphasis.

Regular readers of my blog know that I've been having a fuck of a time keeping my server stable. Maybe the power isn't clean here. Or maybe the electricity is haunted. I also have to move frequently. Way back in the day, I had a 2400bps dedicated SLIP connection to my Amiga 1000 running KA9Q (a TCP/IP implementation packaged with several standard things such as mail and remote shell, all in a monolithic executable), long before I registered my domain. Later, it was 9600bps PPP on a 386, then a 486 on a dorm Ethernet line, back when you got a real IP address in your dorm room, then cable, then one employer's server room, then a friend's T1, then home DSL again, now, God knows what... something.

The idea of a purely virtual existence... virtual homelessness... is suddenly intriguing again. Places like netfirms let you point your home page there. Their Perl is horribly gimped out, and shared hosting in general sucks. Other places offer free shell accounts. youtube will store your videos, if you upload them all; flickr, your photos. I really like being able to just dump things in folders and see them, but there are more and more hacks that mount webservices as filesystems through FUSE and whatnot. How far could one go in gluing all of this stuff together? Would it be less work and pain than trying to build a stable system and keep it running and hosted?
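As a sketch of the FUSE angle -- sshfs standing in here for the flickr/S3-style mounters, which work the same way; the host and paths are invented:

```shell
# Mount a remote directory as if it were local, via FUSE + sshfs.
# Everything below is illustrative -- substitute your own host and paths.
sshfs someuser@shellhost.example.net:/home/someuser/media ~/media

# Now "uploading" really is just dumping things in folders:
cp ~/mp3s/*.mp3 ~/media/mp3s/

# Unmount when done.
fusermount -u ~/media
```

The same trick, repeated per service, is about as far as the gluing currently goes; anything fancier means scripting against each service's own API.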

But that's largely a projection of my own frustrations in keeping myself hosted and running. I guess I identify with my once cool, but now old, beaten, battered, and unwanted RISC hardware. There's nothing I can do; it'll wind up in the dumpster long before it stops running (except those stupid fucking harddrives, but the one gigers never seem to die). I wonder if I could virtualize my own existence. Greyhound has $522 unlimited ride month passes. Without having to make connections in a timely fashion and be somewhere, I could slowly roam the country, do my work at Greyhound stations, and then sleep on the bus (theoretically). I could use various climbing gyms and dojos to shower. I could visit all of my scattered friends.

I'd have to resign myself to computer homelessness at least to the degree of giving up my RISC hardware, at least having it running. I could move everything to or one of those. Mini-storage might cost about $80/month; slicehost starts at $20. And $500/month for a place to sleep and outlets to mooch power off of (at least outside of Texas). $30/month for GPRS/EDGE from T-Mobile. Food, a couple hundred. Incidentals, I don't know. So I've got about a thousand dollar existence, available for the taking. Could I actually make the most of it, or would my human psyche bog me down?


by scrottie at October 12, 2007 11:32 PM

September 30, 2007

Scott Walters on

Solaris 9: a mini review in an unknown number of parts

Backstory: Linux and BSD crashing on me left and right is one of my recurring themes. So is pining for BSD/OS. Ogling BSD/OS's mindbending ability to get arbitrarily long uptimes each and every time it's installed on a machine, I noticed that Solaris and Windows 2000 also did pretty well. Windows 2000 would require me to retool very heavily, so I decided to try Solaris. Sun releasing Solaris 10 for free later encouraged this.

Everyone seems to hate commercial Unixes in the same way they hate commercial database systems: they're buggy, overly complex, and just plain bizarre in the way that a software project becomes when ideas are inbred repeatedly within a small group of developers closed off from meaningful feedback. Here are some commercial Unixes that have sucked and been hated over the years: AIX (various flavors), HP/UX (much hated), Ultrix, SCO. Well, enough of that. The story seems to be that the longer a vendor has to "refine" the OS, the more they screw it up. Ones hastily slapped together from SysV or BSD and released are fine, as are ones unmodified except for something bolted on: A/UX, AMIX, DomainOS, SCO (in the old days, before they turned it into some bastardization).

Solaris 10 drops support for the 32 bit Sparcs, which are the old Sparcs (including the original Sparcs). Sun numbered the '4' series the way they had the '2' and '3' series (which ran 68k chips), then did a few little "lunchbox" machines with the IPX and the IPC, and then moved back to the pizza box again with the sexy slim Sparc 10, the slightly larger Sparc 5, and the similar Sparc 20, all of which had different support for similar swappable CPU modules (including no support), different RAM, different drive caddies, etc, but were each individually extremely neat, well thought out machines. In the pizza box style, you open them up and everything is laid out flat, nothing really being stacked on top of anything else except maybe a CD drive on top of a floppy drive, or one HD on the other, or cards plugging in above the motherboard. The motherboard on the bottom of the unit is a signature of the pizza box style. Anyway, I have about 40 of these puppies and Solaris 10 doesn't run on them. So I had to order some UltraSparc gear.

Solaris 10 coming out open-source seems to have renewed interest in the newer 64 bit UltraSparc hardware. Ultra 5s that already had RAM and a HD were selling at $100 or more on eBay. A lot of the Ultra 5s were labeled as "untested", and those weren't selling for much at all, if they sold at all, which most didn't. Turns out people knew something I didn't: Ultra 5s are crap. Sun decided they needed to compete with PCs on price, so they designed them poorly, with random shaped metal fixtures overlapping randomly. The thing is made in China, like most computers nowadays. This one had typical problems: it schizophrenically couldn't see many devices that were soldered onto the motherboard. OpenBIOS told me that it had only been booted 20 times. I tried a Sun-made PCI SCSI controller to replace the non-working on-board IDE controller (which bizarrely doesn't have all of its pins wired, so no software fix will get the thing supporting drives larger than 300 meg), but it couldn't see that either. I found and went through the procedure in the Sun service manual, and after running several diagnostic routines in firmware that I didn't know about, was told to replace the mainboard.

Okay, I'm on a budget here, and my Ultra 5 is no good. At this point, I get the bright idea to buy Solaris 9, the previous version, and put it on one of the aforementioned non-Ultra Sparcs. I pay $10 for shipping and am annoyed at that, but I get a 10 pound box in the mail with an installation guide that has a quickstart guide which has a roadmap for it. There are license agreements, errata for the hardware support manual, a hardware support manual, and, all told, about 20 little booklets of various descriptions. There are two DVD cases and two vinyl CD books. Then there are other CDs in jewel cases, just shrink wrapped and thrown in there. I feel like I've just returned from a Sun conference where the swag was flowing freely but for some reason don't remember being there. The manuals are printed on drool-proof paper. The installation guide assumes that:

1. Everything goes off without a hitch
2. You're extremely stupid and need to be shown what accepting defaults all the way through looks like and assured that when it says everything is okay that it really is okay so you don't flip out and start crying like a baby

Everything did not go okay. The whole setup assumes that Sun sold you the harddrive. The installer is not capable of installing to a SCSI drive you bought new elsewhere, which will lack the special Sun disklabel written to the first few blocks. Argh! Beginning an OpenBSD install got the disklabel there. Other operating systems know how to write these but Solaris doesn't. Apparently you're just supposed to buy all of your SCSI discs from Sun.

Also, at the same time as I got Solaris 9, feeling sorry for the poor Ultra 1's, which aren't supported by Solaris 10 even though they're real, honest-to-God UltraSparcs, I bought one for 99 cents and got it shipped DHL for $15. Those and the Ultra 2's, which seem to be much more rare and much larger, run the UltraSparc I chip, which could be completely locked up by a malicious code sequence, so Sun decided to treat them as non-Ultra 32 bit Sparcs, which they would run in a backwards compat mode. When Sun dropped support for the 32 bit Sparcs, they ditched the Ultra 1 and 2 also. So I've got one of those here on OpenBSD 4.1, and I'm interested to see if OpenBSD is more stable on the UltraSparc than the Sparc. I think Ultras are more popular for this sort of thing nowadays, so the port might be better tested and maintained. Unlike the Ultra 5's, they're extremely well made (you'd have to see these big, solid, sturdy, intricately engineered machines with pretty, elaborate cables and connectors... the whole thing just screams that it cost thousands new).

Back to Solaris 9. The installer gave me little indication as to what it was really doing. It asked me the usual questions: whether the machine was networked, whether to autoconfig from DHCP (no), the gateway, nameservers, IP, etc. Which timezone, which languages to install support for, which level of an install (minimal, end user, developer, server, or whole shebang), and then it had me swapping discs: putting in the software disc (replacing the boot disc), putting in the 2/2 software disc, putting in the boot disc again, putting in the supplemental (3/2) software disc, trying to get me to put in the even-more-software (4/2) disc (no, stop it!), rebooting a few times, and then finally letting me know it was done by giving me a login prompt. It felt more like installing Windows than Unix, even though it was all done over a serial console. I'm used to Unixes just smearing themselves all over the HD as fast as the HD can write without coming up for a breath and then doing a quick reboot to the xdm login screen, as if to show off how easy it was to clobber Windows into bit oblivion.

So now I'm in. It's funny that OpenBSD and Solaris have both touched this same machine. OpenBSD is "secure by default" and runs little. This pigfucker has rpc running, for which an exploit comes out about three times a nanosecond and for which about a trillion exploits have come out since Solaris 9 was released in 2002. This *was* more like installing Windows than Unix. Next step is to figure out how to get patches from Sun. But that's not it... there are the usual culprits, like sendmail, that I don't want running, but there are also about a dozen things I've never heard of. So I fire up man on them to see what they are. And I get this:

"The DMI Service Provider, dmispd, is the core of the DMI solution. Management applications and Componetent instrumentations communicate with each other through the Service Provider. The Service Provider coordinates and arbitrates requests from the Management applications to the specified Component implementations. The Service Provider handles runtime management of the Component Interface (CI) and the Management Interface (MI), including component installation, registration at the MI and CI level, request serialization and syncronization, event handling for CI, and general flow control and housekeeping.". End paragraph.

Fuck the... what!? I'd say that was clear as mud, but it makes perfect sense. Here you have a business-speak explanation, as if written by a guy in a suit who writes verbosely without meaning, combined with humorous levels of engineer-speak, where some programmer picked comically generic terms, exactly those that every good programming style book tells you not to use in your programs, and then passed it off as documentation. A company has to be really big to generate garbage this putrid. Seriously. Fuckin' a. I bash on Linux and BSD on here, and on the snot nosed brats with no concept of what "stable" actually means (hint: the Windows guys have little idea, but with Win2k compared to Linux, I can't really say that any more) that keep committing code, but looking at this feculent dung, the underlying forces driving people towards Linux and BSD over yet another commercial Unix become quite clear. You run this garbage because you're in a big company, life sucks, and someone tells you to, not for any other reason, ever. Seeing this garbage bolted onto Unix, something that subversively pushed off the bonds of Big Company Serious System Think (tm) and did something small, elegant, direct, approachable, forward, sane, logical, and defensible, I'm enraged. The fuckers at Sun have turned Unix into the MULTICS that Unix was laughing at. Fuck you, Sun!

I'm not done yet. I want to see if I can strip off some layers of garbage to get at a gem at the core. I guess I should ls -l / and see how big the kernel is before I get my hopes up. If I have to go wading through subdirectories, I'll be disheartened. If the init.d and /etc are more complex than Ubuntu's, I'll be disheartened.

So, here's what I've got. A Sparc 10 (flat little pizza box) on the floor, top of the case off, all 8 memory slots filled giving it 128 megs of RAM, a 4 gig Seagate ST3520N with 4.5 gigs on it connected over a SCSI-I interface, two 75mhz CPUs each with 2 megs of L2 cache, a home-made serial cable running out the back to a 486 laptop as console, and, finally, Solaris.


by scrottie at September 30, 2007 05:36 AM

September 20, 2007

Scott Walters on

The Myth of the Easy Answer and Women in IT

My friends inspire most of my articles. Thanks, friends.

One friend works at a company that built its IT infrastructure on Microsoft products, and they're adding SAP to the mix.

Friend in question is a she.

She was hired for a fairly high level position and, as you'd expect, a lot is being asked of her, but she's very smart and capable of working hard.

She keeps wondering what she's supposed to do. This question is flawed -- there isn't some particular thing she needs to do. And even figuring out what to do isn't enough. Nor is going a step further beyond that and implementing some solution.

The problems are mostly social, even though it's easy to get hung up on the depth and severity of the technical problems.

What's needed is someone who is...

Arrogant enough to scheme something up that throws out the mess that everyone is clinging to and replaces it with something else.

Manipulative enough to coerce other people into going along with a plan.

Subversive enough to ignore the official dictates from the people who created the mess in the first place.

Some jobs call for a dick, in the bad sense of the word.

You have to conduct yourself in the nerd equivalent of a football player or ape. It's you against them, and you're in charge, or you're going to try damn hard to be, unless they decide to help you, which they won't do willingly.

The mess was created by people selling simple solutions to complex problems. The primary benefit of doing this, besides people really wanting this non-existent ideal, is that you can be nice about it. Java developers put a nice facade on the brutal practice of software development. Letting non-technical people attempt to create IT for data driven companies using Access or Excel is extremely nice -- it's completely hands off. Just give them the rope and let them do the hangin'. The Java people put a nice facade on through excessive professionalism and formality and ritual; Perl people do it through over-friendliness and inappropriate levels of agreeability. I'm not saying that if software development is treated like the brutal practice it is, success is assured, merely that failure cases aren't dealt with properly otherwise.

Oh well. That's all boring stuff.

A whole bunch of sales people want reports, often custom, on data that comes in through ftp, very un-fresh, gets imported into a database, and sometimes gets re-exported and run through Access to make reports. It's pretty manual and hard to customize. So they're adding SAP.


by scrottie at September 20, 2007 10:47 PM

September 15, 2007

Scott Walters on

Dual Xeon mail server and the CIA, a tale of a small ISP

A friend, Awwaiid (who I'm working with on the Continuity module), and a friend of his run a small hosting company. Their customers are primarily non-profits.

A while back, someone gave them two Xeon processors and they decided to use them, so they bought the motherboard and 1U case and decided to make it their new mailserver. The old one was overloaded. They'd been struggling with it for some time to get it stable. It was doing some combination of overheating and shorting but they got it into production.

Today I asked Awwaiid how it was doing:

Me: how's the insanely overpowered mail server holding up?

Awwaiid: rock solid
Awwaiid: oh wait
Awwaiid: which one?
Awwaiid: the new one?
Awwaiid: the CIA took it

Me (thinking he's joking): don't blame them. it's *highly* suspect.

Awwaiid: see... david, being, well, david ... decided to drop it in the school of comm to get some extra rsync bandwidth
Awwaiid: and someone noticed it (where the fuck did he put it, the hallway?) and tripped out
Awwaiid: NAU IT called cops, cops have a special CIA program... they decided it was a phishing setup

Reportedly, the CIA is planning on returning it (since it wasn't in fact a phishing setup). Right now, the main server, which runs shell accounts and Web administration and Web hosting, is running pivoted-root, booted from a CD since the thing decided a while ago to crash and not come back up. Happily, or perhaps less than ideally, that machine is at a colo.

I love these stories... it's like something out of User Friendly.


by scrottie at September 15, 2007 12:49 AM

September 14, 2007

Desert Dev House Social Code Fest News


Derek Neighbors edited FrontPage

Event Info
Is it a party? Is it a productive and educational event? You will have to come find out. Have fun and get things done!
Event (pdh1) (pdh2)
Thanks to all that attended the first event. If nothing else it looks like it spawned an exciting new developers group called refactor phoenix.

Location: Creek, AZ
Date: 4th, 2007
Time: - Midnight


Event Planners
Derek Neighbors

by (Derek Neighbors) at September 14, 2007 06:47 AM

August 23, 2007

Scott Walters on

"Is stealing wireless wrong?" needs to go away

Okay, so now I'm wasting all of my time reading Reddit. Thanks.

Ref ...

Let's try to put this in perspective.

Originally, the 'net was a few educational and research sites. But soon afterwards, it was many educational sites, and the 'net was built out largely by splitting off blocks of IPs and running a connection over to somewhere else, sharing some of your bandwidth. In the mid 1990s, the University of Minnesota had the only T1 into town. Governmental agencies were connected through them, and they started reselling it. For years, connecting other people using your connection has been how the 'net was built.

Based on this model, a whole mess of mom-and-pop ISPs sprang up, buying a 64k line or a T1 and reselling it. They catered to their advanced users, who were the majority, and, for modest fees, accommodated requests such as static IPs or delegating reverse DNS for an IP. So these guys would be buying from someone else, like the University of Minnesota, and selling it to you; you'd put your whole company on a dial-up line, then set up some dial-ups for employees to connect to from home, and this was fine, good, and expected.

A few short years ago, in the late 1990s, the cable providers that were just springing up decided that you couldn't share your connection at all. They weren't just providing you with bandwidth but somehow owned the Internet and could tell you what you could and couldn't do with it. One of the things you couldn't do, according to them, was connect more than one computer. If you had a LAN, they wanted you to buy cable Internet for *each* computer -- as if. But people set up Linux and BSD NATs, and before long, commercial connection-sharing devices came along.

When it came out, cable was seen as junky, second-rate Internet access. Running IP over the cable network seemed like a kludge, the cable "modems" were big and klunky, and the cable companies just had the wrong background -- they were people who pushed content and decided what you would see, and they just didn't understand the idea of a peer-based network. Yes, hosts on the Internet are *peers* -- if you have an IP address, you can run whatever type of server software you want, as well as connecting to servers on other machines. And worse, they blocked a whole bunch of ports, scanned you, and wrote you nasty letters if they caught you trying to run your own damn incoming mail server, like that's really going to hurt them. But then people got used to it and forgot about it, and then the major DSL providers decided it would be a good idea for them to "own" your Internet, too. While the port scanning, nasty letters, and port blocking have been scaled back a bit, the telling you what you're allowed to do with your 'net connection, and the generally acting like they own the fucking thing, has continued. ISDN was going nowhere fast, though.

Then WiFi happened, slowly at first. Sites were connecting using directional antennas and 802.11a radios. The tech got cheaper, and it was a neat but expensive toy for lighting up campus squares, and then before long, every office and home seemed to have the gear. And, of course, at first, you weren't permitted to attach any sort of WiFi to your cable or DSL connection. Sometimes they'd even come around and check and make an example of you if they found you.

So, here's what I make of this. People have a choice. We can decide that the 'net is something that we build, own, and share, and that the point of the thing is just to be able to connect to each other over a neutral network that's all of ours. The name "the Internet" befits such a network. For ages, it was heralded as being "self policing", before it became big business. But it still has standards bodies and conventions suiting a peer network, like the robots.txt exclusion format, designed to keep all of the peers happy with each other.
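For the curious, the robots.txt convention amounts to nothing more than a plain-text file served from a site's root, honored voluntarily by well-behaved crawlers -- a peer politely asking other peers to stay out of certain corners. The paths below are invented for illustration:

```
# Served as http://example.com/robots.txt; crawlers fetch it before indexing.
User-agent: *
Disallow: /cgi-bin/
Disallow: /private/
```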

Or, we can decide that the Internet is commercial property. In this world are peering disputes, throttling or blocking access to competitors, terms of service, snooping, stifling of innovation, and so on. But most of all, you're a consumer, or a subscriber... a user. The money you pay isn't for their peering centers, the bandwidth, the local loop, or anything like that -- you're paying for content, content which just happens to be made freely available by people dumb enough to still think of the 'net as a peer-based network. Free content, as well as access to commercial services. Incidentally, this is what AOL, GEnie, Prodigy, etc. used to be -- you paid for access to some badly collected free content and a whole lot of commercial services, each of which was extra. I think accessing the World Book encyclopedia had a $5/minute added fee, back when $5 was worth something.

So, which is it? A subscription data service with access fees and central authorities who poutily look after their own interests and no one else's?
Do we imitate and thereby enable these people? Or do we continue to spread, build, and share the 'net, seeing each host on it as a peer who potentially has much to contribute, and seeing other people as peers, whose access we sometimes use and who sometimes use ours?

So, if you don't want to share, set a damn password.


by scrottie at August 23, 2007 06:18 PM

Why I don't do large projects

(Pasting and adapting from one of my Reddit replies, to a comment where someone decided that I must not be doing large enough projects to have reached that conclusion:)

There was this time when I redid the code side of the most recognizable comedy club chain's site while a friend cut up the graphics. We had a one month deadline. They had a big co-branded launch with a major and appropriate cable TV channel.

Their site is and was absolute garbage; the specs called for a new ticket system, contextual streaming video, photo galleries, video galleries, artist's blogs, and a pile of other crap. We hit the deadline, kind of, in that the cutup job was rough and some of the site features were hacked up very quickly (site search was a grep -i, the calendar was `cal` run through a regex to turn it into an HTML table with the numbers turned to links, etc).
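For flavor, here's a hedged sketch of that calendar hack -- not the actual code from the project, just the general shape of piping `cal` through a filter that wraps each day number in a link (the `/calendar?day=` URL is invented):

```shell
#!/bin/sh
# Turn `cal`-style output into an HTML table, one <tr> per week,
# each day number wrapped in a link. Rough, like the original hack:
# it ignores the first week's column offset entirely.
cal_to_html() {
    awk '
        BEGIN { print "<table>" }
        NR > 2 && NF > 0 {     # skip the month header and weekday-name row
            printf "<tr>"
            for (i = 1; i <= NF; i++)
                printf "<td><a href=\"/calendar?day=%d\">%d</a></td>", $i, $i
            printf "</tr>\n"
        }
        END { print "</table>" }'
}

# In production this would simply be:  cal | cal_to_html
printf '%s\n' '   October 2007' 'Su Mo Tu We Th Fr Sa' ' 1  2  3  4  5  6' | cal_to_html
```

Crude, but when the deadline is a month out, crude ships; the `grep -i` site search was the same spirit.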

And the bastards didn't pay us. It wasn't 100% what they dreamed it would be in the end, even though it was insanely much better (I know, grammar) than what they had and a huge pile of work for a month, so the directive to use as many hours as needed to hit the deadline was cancelled, after the fact. The short version is: the more the client is asking for in the beginning, the less likely it is that they'll be happy with *anything*.

Let's put it this way. You walk up to a person on the street and ask them, "What would it take to make you happy?". They might chuckle and say, "I'm pretty happy, but right now I'd like to figure out where I parked; I've been walking around all day". They have an immediate, reasonable need and aren't leaning on you to prop up a fragile but overdeveloped self-image. Or maybe they'll say, "I want a luxury yacht... no, wait, I want the *best* luxury yacht, and the fastest car, and the biggest house, and...". These are the ones to watch out for. Nothing will suit their ego. Nothing will make them happy. Even if you did somehow manage to make the best site in the world for them, a fed ego only gets hungrier. They'll happily sacrifice you and their site to their ego. Telling you that it's not good enough and that they don't want it anymore makes them feel smarter and savvier than having the thing would. Perl before swine.


P.S.: To whoever leaked the URL of my previous article to Reddit, I will hunt you down and punish you.

by scrottie at August 23, 2007 03:18 PM