
Tuesday, October 12, 2010

Sensible Letter

I just contacted my House representative with the following letter. It is a more properly measured correspondence, filled with less ragespeak and more call-to-action words. I wait patiently with bated breath for my form letter response thanking me for contacting him about gun control laws. Sigh.


----

Regarding TSA experience:

Mr. Joe Barton:

After refusing to have my body irradiated and naked pictures of it shown to "highly trained security professionals", I was given an "enhanced pat down" which had no patting involved, only lots of unwanted rubbing and general molestation. This is not security. This is a blatant violation of my person. Then when I voiced my dissent about this process using strong language (as any reasonable person would do under the circumstances), I was "invited" to have a conversation with a manager in a suit and a uniformed law enforcement officer. At this point, I asked if I was being detained or if I was free to go. My question was not answered until after the LEO and manager arrived, at which point they attempted to intimidate me into surrendering my First Amendment right to free speech. I was then accused of causing a "scene" and was given the threat of "not flying." I argued for my protected right to free speech, but then realized that this was not going to change anything. I reiterated my earlier detainment questions, was told I was free to go, and left immediately.

After having such a terrible experience at the hands of these poorly trained "professionals," I urge you not to further support any bills that would increase funding to the TSA for additional staff or equipment, and to vote for any bills that limit the TSA's governance. Situations like mine are not increasing airport security; they are violating individuals' rights and persons.

Sincerely,

Nicholas Perez
[contact information redacted]

Monday, October 11, 2010

Fuck you TSA

I refused to be irradiated, so instead I was molested. And because I had the unmitigated gall to speak my fuckin' mind about this, I was then "invited" to have a conversation with a couple of fuckin' goons. Did you know that these assholes think they are protecting this country? And that because I said "Mother Fucker" to one of them, I was causing a "scene." Of course I am, you buffoon in a blue shirt. Causing a "scene" is not against the law. MOTHER FUCKER is constitutionally protected speech. And to educate the TSA a bit more, saying FUCK is not the same thing as shouting fire in a theater. Learn the difference before trotting out that tired old line to justify your horrible argument for censorship. A note to anyone else ever in a similar situation: take pictures and remember the magic words: "Am I being detained? Am I free to go?" If they answer no to the first or yes to the second, just walk away. These fucktwats are simply not worth the time.

Pittsburgh Perl Workshop and Stuff

So this past weekend was neat. I love coming to workshops even if it is a very compressed time frame. My takeaway from this workshop is that I have wanted to do some hardware hacking for quite some time, but never got around to purchasing an Arduino board with some add-ons. So perhaps in the next few months I'll start working on something. Something network-enabled and practical. We'll see. Not that any Perl runs directly on these boards (the code is some kind of C derivative with a compiler/IDE), but you can use Perl to talk to them over a serial port or whatever.

And while I had set off to write a POE::Filter that spoke the Minecraft SMP protocol this weekend, it never materialized. Instead, I started working on modernizing yet another one of my modules: POE::Filter::XML::RPC. The problem is that I am subclassing XML::LibXML::Element, and it doesn't play nice with subclassing. So I wanted to extend it using Moose and advise all of the methods I could find that returned other Elements or Nodes. That was going to be a very, very large copy/paste job. MooseX::Declare didn't support shortcut method modifiers (around [qw/foo bar baz/] {...}). So last night and this morning, I hacked that support into MooseX::Method::Signatures (and its use is transparent to MooseX::Declare). All so I could have a single line of copy/paste instead of a lot more. I'm so god damn lazy sometimes.
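
For the curious, the shortcut form looks roughly like this with the patched MooseX::Method::Signatures in place; the classes and methods below are placeholders, not code from the actual branch:

use MooseX::Declare;

class My::Base {
    method foo { 'foo' }
    method bar { 'bar' }
    method baz { 'baz' }
}

class My::Wrapped extends My::Base {
    # With the shortcut support described above, one modifier body can
    # advise several methods at once instead of being copy/pasted per method.
    around [qw/foo bar baz/] ($orig: $self, @args) {
        my $result = $self->$orig(@args);
        return "wrapped($result)";
    }
}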

An interesting side effect of adding this functionality to MXMS: you can have stringified array references as method names, heh. In MXD, the declarators are given specific meaning and functionality via callbacks, so the array references are absorbed and passed on to the meta-munging methods that add the method modifications.

Anyhow, I've made the pull requests to rafl and am just waiting for him to incorporate and release.

If anyone is ever on the fence about whether or not to attend Perl workshops, let me settle that for you: GO. It is an awesome time. Everything from learning to networking opportunities awaits any avid Perl programmer. And this gave me ideas for next year's set of talks that I need to write (I didn't speak at this workshop; I was all tuckered out from speaking earlier in the year).

Hope to see new faces at OPW in January.

Thursday, October 7, 2010

Of Modernizing My Own Modules

It is amazing how much bitrot accumulates over the course of a year and a half. And also amazing to see the state of the art move so rapidly.

Dist::Zilla has a break-neck pace of development. While my old dist.ini files still run without issue, so many new and awesomer features have crept into the core. This makes it worthwhile to revisit your dist.ini files even if the changes to your project are minor. AutoPrereqs is smart enough now that only a few MooseX::Declare edge cases are missed. PodWeaver is just plain awesome, and if you aren't using it to generate your POD, something is wrong. The introduction of the Basic PluginBundle means you have even more fine-grained control over your distributions. If you are looking for a basic template of a dist.ini that you can use in your own projects, take a gander at mine: http://xrl.us/bh3y2k (Link to nickandperla.net). And if you've looked at my documentation for modules and like my POD style, here is my weaver.ini too: http://xrl.us/bh3y2x (Link to nickandperla.net)
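
For reference, a stripped-down dist.ini along those lines looks something like the following; this is a minimal sketch, not the file linked above:

; a minimal sketch, not the dist.ini linked above
name             = My-Dist
author           = Nicholas Perez
license          = Perl_5
copyright_holder = Nicholas Perez

; the PluginBundle mentioned above covers the gather/test/build/release basics
[@Basic]

; scans the code and fills in prerequisites automatically
[AutoPrereqs]

; weaves the POD so it isn't maintained by hand at the bottom of each file
[PodWeaver]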

And while I like modernizing my code, this latest push for several modules wasn't entirely voluntary. Newer perls have broken MooseX::Method::Signatures (and likely Devel::Declare itself). This affects a large chunk of my code, as I have grown very accustomed to having constraints on my methods. The brokenness lies in the parsing itself: it seems it doesn't like there to be a newline between the method declaration and the opening brace. In other words, to avoid this problem you have to use ugly K&R-style braces. So that is what I have been doing for a lot of modules, moving the braces.
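
Concretely, the workaround looks like this (the class and method are placeholders):

use MooseX::Declare;

class My::Example {
    # Broken on newer perls: a newline between the declaration and the brace.
    #
    #   method frobnicate (Str $name)
    #   {
    #       ...
    #   }

    # Fine: keep the opening brace on the same line as the declaration.
    method frobnicate (Str $name) {
        return "frobnicated $name";
    }
}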

But for other modules, it was good old-fashioned modernization. A new version of POE::Filter::XML was released last night that brings it into the modern era. It was an old module in the sense that the distribution still had build artifacts stored in source control. The POD was all done by hand and at the bottom of the files. POE::Filter::XML::Handler didn't even have any documentation. I wrote my own god damn accessors. It was ugly.

So I put in the time to make things right. It should be backwards compatible (to an extent). Node was updated to be a proper Moose class (using MooseX::NonMoose::InsideOut) that extends XML::LibXML::Element. This lets us, among other things, override methods and call super() when appropriate (making sure that methods that return Elements actually return Nodes). All of the custom constructors went away. And the logic was greatly simplified by using attributes with native traits. The code simply /looks/ modern.
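
The shape of that change is roughly the following. This is an illustrative sketch of the approach, not the module's actual code:

package My::Node;    # illustrative stand-in for POE::Filter::XML::Node
use Moose;
use MooseX::NonMoose::InsideOut;    # keeps Moose attributes outside the XS object

extends 'XML::LibXML::Element';

# Methods that normally hand back raw Elements get overridden so callers
# always see Nodes; a real class would also provide FOREIGNBUILDARGS so the
# Element constructor receives its tag name.
override firstChild => sub {
    my $self  = shift;
    my $child = super();
    return defined $child ? bless($child, __PACKAGE__) : undef;
};

__PACKAGE__->meta->make_immutable;

1;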

Next on the chopping block is POE::Component::Jabber. I plan on stripping the functionality down to a mere Role. Also, I am going to remove support for pre-XMPP connections and server-specific connection types for components. I want a mean, lean Role that can be composed cleanly and easily. I also want it to be easily extensible, so if other enterprising developers /want/ those other connection features, they can implement and use them without monkey-patching my code.

In other words, POE::Component::Jabber is disappearing and will be replaced with POEx::Role::XMPPClient in the not too distant future.

So don't be afraid of looking back on your modules and vomiting a little in your mouth. It doesn't take /that/ much time to modernize them. That way the next time you want to use something and find a bug in it, you won't cringe when you need to fix it and do a quick release.

Thursday, June 17, 2010

Of Perl Jobs

So I've been on the receiving end of a couple of unsolicited emails regarding open positions at a couple of companies. And it is odd to get these kinds of emails, especially when they aren't from recruiters but from the hiring managers themselves. It is a little flattering, to say the least.

That said, the one thing these emails/companies/etc. have in common is that the culture that makes me the most productive just isn't there, mentioned, or encouraged. Now, one could say my workplace is a little unorthodox since we are all virtual, but in the grand scheme of things, it isn't the virtual part that matters most. What matters most is that our company isn't isolated from the community.

We actively participate on CPAN, IRC, message boards, and most of the major modern Perl projects' mailing lists. In our day-to-day, our communication circle encompasses the community. We contribute to important projects. We release generic solutions to problems we've solved in the course of our business. And we make use of others' solutions as they release them. Frankly, I don't see how we could get much done if we /didn't/ invest as heavily in the community as we do.

So when I get emails or see job postings from companies that fail to mention any level of community participation, they are placed into the round file. And I am not alone in this. The people I consider my peers, other community members, the "rockstars" these job postings are seeking, aren't about to jump ship from a culture of shared commons and productivity to one of isolation and take-but-not-give-back.

So in the future, if you recruiters or hiring managers wish to cater to truly senior-level Perl developers, Perl developers working with the state of the art, please take the time to understand that there are deeper motivations than merely salary, location, or even the business domain.

Thursday, June 10, 2010

POEx::WorkerPool and Poke

I am pleased to announce a new release of POEx::WorkerPool and a new TRIAL release of Poke. POEx::WorkerPool is a multi-process framework for implementing the worker pattern in Perl, using POE as the backend. All you need to do is compose the Job role and you're golden.

Poke is a monitoring framework that uses POEx::WorkerPool as its foundation. Poke gives you the ability to simply compose its Job role and add the job's configuration to the config.ini. Your jobs can do anything. Need to run a complex query once an hour to make sure a queue is getting cleared? No problem. Want to ping a server to see if it is alive? Easy. By default, Poke comes with a simple HTTP job that merely hits a URI to see if a 200 is returned.

Poke wouldn't be much if it were just a subclass of WorkerPool; it is much more than that. Each job status is committed to a database along with start and stop times. The status of jobs can be viewed through the embedded web component, which shows the last status result and the previous 10 for a particular job. The system itself can be meta-monitored by tailing the syslog output. And just about every part of the system is configurable, from the number of worker processes to the level of output to syslog.

Configuring jobs is easy as well. Attributes in the job's config correspond to actual attributes on the job class. Your config can contain any number of uniquely named jobs, each configured to fire on its own frequency. A hypothetical snippet is sketched below.
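
To give a feel for the shape of it, here is a made-up snippet; the section and key names are invented for illustration only, and the example.ini shipped with the distribution is the real reference:

; hypothetical configuration -- section and key names are illustrative only
[CheckFrontPage]
; assumed name for the bundled HTTP job class
job_class = Poke::Job::HTTP
; keys map directly onto attributes of that job class
uri = http://example.com/
; how often to fire, in seconds
frequency = 300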

This is another TRIAL release because I haven't written proper tests or fully documented the guts. Some things are still in flux when it comes to the embedded web portion. There are some features in HTML::Zoom and Web::Simple that haven't reached release yet, and I didn't want to depend on git versions for this release. Ultimately, I'd like to be able to control job execution from the web portion (pause, one-time run, delete, add), but I am not sure that will be in the initial finished release.

Please feel free to download the TRIAL release and play with it. It comes with an example config. From the project directory, simply run perl -Ilib bin/poked --config example.ini --no_fork 1, monitor the daemon's syslog output, and connect to http://localhost:12345/ to see the web output.

Friday, May 21, 2010

Iron Man Fail + Xenoterracide is a whiny bitch

So, I have failed. I failed to post within the given window for the Iron Man challenge. There is a reason for this: I lost interest. And I generally lost interest in the Perl community for a few weeks. But I only did so because other interests and activities, like archery, fletching, and starting my own business, consumed my time. That said, I felt it was important to let you, my dear readers, know that I didn't die and that I will continue to blog, though perhaps not at the frequency required for the Iron Man challenge.

Now on to other matters. Lately, a tiny subset of the Perl community has been in an uproar over Xenoterracide's blog, and for good reason. He is a whiny bitch. I find it laughable that Caleb feels entitled to our labor. The whole kerfuffle started when he bitched about rjbs' wonderful Dist::Zilla. His complaint was that it lacked documentation, and he felt that rjbs owed it to him or shouldn't have released the software in the first place. In Caleb's world view, all released software should have documentation and he shouldn't have to lift a finger for a project he finds useful. When people rationally explained that he ate too many paint chips as a kid and that isn't how the world works, he decided to stick his fingers in his ears and start shouting "I'M NOT LISTENING LALALALA." Now he is hellbent on convincing people that his delusion is how the world actually is. So with all this back and forth, what has Caleb actually contributed? Nothing.

People, this is what we call a leech. Now here comes the philosophical rant.

I've been working with and developing Free Software for a number of years. I generally believe in the share-and-share-alike principle when it comes to Free Software. I release my code under the GPL and GPL-compatible licenses because I feel the solutions are general enough that others may find them useful for their stated purpose. Others do the same. And in the end we build up a giant collection of useful software that we can all view, modify, and redistribute with our source modifications. In business, this makes sense. As an industry (software development, that is), we've developed a set of tools to accomplish the most generic goals that we all have. This allows us to focus on the needs of the business in developing our solutions and saves the cost of reinventing the wheel the next time we need an asynchronous event framework like POE. We are a community of peers. We participate in a commons toward shared goals. I stand on the shoulders of giants when I contribute, as do many others. It is a great system and it works really well. But there are a few bad apples. The leeches.

Leeches drain resources from successful projects with their inanity because they do not contribute any useful labor back to the project they are using. These are the people that bitch and whine in IRC without putting forth the basic effort to understand the project and its purpose. These are the people that bitch and whine in their blogs, goading responses from the community at large-- a community of peers that feels the responsibility and honor that comes from being a constituent.

Caleb, it is really quite simple. We are not here to please you. Free and Open Source Software is a meritocracy. If you take and take and take, but give nothing back, guess what? You have no standing. We are not going to move mountains for you or spoon feed you bite sized chunks of understanding because you are too fuckin' lazy to read the source. If you do not demonstrate a willingness to be part of our community, then get the fuck out. We have better things to do than coddle leeches.

Thursday, April 29, 2010

Other languages (specifically .NET languages)

Lately with work, I've been busy in some old-school .NET 2.0. Until you work with something different, you never really realize how easy development in Perl is. My top three reasons why I enjoy Perl over other languages:

1. Built-in complex data structures. I don't need to invoke, import, or compile any additional dependencies to get lists, arrays, and associative arrays. This is big. Most Perl regulars don't appreciate their hashes and arrays until they are once or twice removed from the core. .NET has these data structures of course, but I have to be "using" the Collections namespace, instantiate full-blown objects, and deal with the lack of a literal construct for them (see the sketch after this list).

2. Transparency. Much like mst does and recommends, I, too, read the source code of modules before I use them to make sure they are in line with what I consider good coding practices. This means when I have a problem or a bug with a module, the source to that module is close at hand. The same goes for the modules considered "core." If I believe there is a problem with how I am using a module and the documentation is ambiguous, I can always pop open the .pm and figure out wtf is going on. This is much harder to do with the class library in .NET. Sure, you can install a reflector and look at generated-from-the-bytecode source, but you lose quite a bit of information doing that.

3. Platform choice. When we do Perl projects at work, we cover a pretty good spectrum of platforms. Some develop on OS X, I develop on Debian, and we've typically deployed to FreeBSD. Our codebase is typically git-friendly, with dependencies easily installed via cpanm. With .NET, I pretty much /have/ to develop on Windows and deploy on Windows, and typically use something like svn or TFS as a source code repository. Mono apologists need not comment. Oh sure, you could whip out CruiseControl.NET and write your own pull scripts to pull shit out of git (with msysgit), but good luck getting that to fly with any customer that is in love with centralized source repositories and has a vested interest in maintaining them (TFS licenses aren't cheap). So what does that mean for me? It means dealing with the headache of doing development inside of a VM, since I do not run Windows on my machine for security and performance reasons.
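
To illustrate the first point, here is the kind of nested structure Perl hands you with nothing but literals:

# Nothing to import or instantiate -- literals all the way down.
my %config = (
    hosts => [ 'alpha', 'beta' ],
    ports => { http => 80, https => 443 },
);

push @{ $config{hosts} }, 'gamma';
print $config{ports}{https}, "\n";    # prints 443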

Overall, I prefer developing projects in my beloved Perl. It doesn't mean I can't do development in other languages; I have bills to pay after all. It just means that I find the .NET environment to be rather limited and extremely monolithic. And good luck doing anything outside of the status quo, or poking at APIs that are rarely used. Down that path lie dragons.

Monday, April 19, 2010

YAPC::NA

This year is going to be very different.

It seems that every talk that was submitted was accepted. I submitted five talks. I'm going to be rather busy this year.

Typically, in years past, a single talk or maybe two were selected when speakers submitted multiple. Also, there were a limited number of time slots for presentations. This limited the pool of speakers significantly and generally provided a great focus to the event.

This year, I worry that there will be an overwhelming amount of material presented.

So we will see. Luckily, a couple of my presentations are repeats from previous workshops this year. All in all, despite my concerns, I am extremely excited to see everyone. By far, this is my favorite time of year. Everyone comes out and shows off the awesome projects they've been working on.

Anyhow, next week you'll see an update on Poke! The developer release went smoothly, but I still have a lot of work to do in terms of docs and tests.

Friday, April 9, 2010

Rewriting POE::Component::Server::PSGI

So, following up on my last post about TraitFor::Controller::Ping, I've been working on a monitoring framework and daemon. But today's post isn't about that; it's about one of the subcomponents of that project, namely the embedded HTTP server.

I mentioned last time that I wanted to make POE::Component::Server::PSGI a bit more resource-conservative and basically pare it down from the POE::Component::Server::TCP it was using (which creates a new session for each socket connection) to its constituent parts, namely SocketFactory, and manage the ReadWrite wheel creation myself. In my monitoring framework I want to display results via a little Web::Simple magic, but I want the app.psgi to take advantage of the framework's tools to connect to the database and make use of the Schema class. I also wanted to avoid the work of opening yet another database connection just for the web app. So how do I pass those structures along so the web app can use them?

It turns out that POE::Component::Server::PSGI doesn't really provide any means by which to hook into the process and provide customizations. This is bad if you want to provide an application-specific micro-framework to your web app. So I had no choice but to start working on a fork: POEx::Role::PSGIServer.

The first step in the process was to basically start over, but reference frodwith's logic as much as possible. To that end, I started with modern POE tools, mainly POEx::Role::TCPServer, which does exactly what I described above: a single session managing a collection of wheels (including SocketFactory).

The PSGI specification also provides a mechanism for streaming content via a filehandle. To support that mechanism, I wrote POEx::Role::Streaming, which takes two filehandles as arguments and streams from one to the other. This encapsulates the typical streaming pattern, and since it is a role, it is easy to consume and override with implementation-specific details (which is required to support chunked transfer encoding).

Then there needed to be a lot more validation of parameters. So POEx::Types::PSGIServer was written to do some light validation on the various data structures passed around inside the role.

Lastly, I needed to split out as many of the closures as possible into real live methods so that I could either advise them or override them. Not only that, but I also needed to break up the larger pieces of code into smaller bite-sized chunks to allow for maximum customization (i.e., what to pass when converting the HTTP::Request into the PSGI $env hash).

All of this is a net win. I am not quite finished yet, but you can take a look at what I have so far on github: poex-role-psgiserver. Docs and tests will be written next. I'll probably cargo-cult frodwith's tests as much as possible to save some time. Expect a release very, very soon.

Anyhow, a BIG thanks to frodwith for his work on POE::Component::Server::PSGI. Next time you see him, buy him an alcoholic beverage of your choice. I know I will.

Tuesday, March 30, 2010

Catalyst::TraitFor::Controller::Ping

So this is a little silly, but I needed a simple method for "pinging" a web app to see if it was up and running. And since, for work, this one $client has a whole hell of a lot of apps to monitor, I didn't want to have to script a bazillion different ways to access these apps. So what is my alternative?

How about a role for the root controllers? And so Catalyst::TraitFor::Controller::Ping was born.

That is my solution. And it takes a couple of different configuration options if you want to do a little more than just spit back an HTTP 200. Give it a model name and a method name, and it will attempt to fetch that model and call the method. If that is successful (meaning it doesn't throw an exception), you still get back a 200. Otherwise, the status will be 500 and you get the Catalyst error page.
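
As a rough sketch of what applying it to a root controller might look like (the configuration key names here are illustrative guesses; check the module's POD for the real ones):

package MyApp::Controller::Root;
use Moose;
use namespace::autoclean;
BEGIN { extends 'Catalyst::Controller' }

# Compose the ping role into the root controller.
with 'Catalyst::TraitFor::Controller::Ping';

__PACKAGE__->config(
    namespace   => '',
    # illustrative key names, not necessarily the documented ones
    ping_model  => 'DB',         # resolved via $c->model('DB')
    ping_method => 'storage',    # an exception here turns the 200 into a 500
);

__PACKAGE__->meta->make_immutable;

1;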

I'll talk about the other end next week, but basically I am writing a daemon that takes advantage of this ping role to check statuses and log them in a database. And you want access to that information? Well, the third piece is a little Web::Simple magic to write a web service that spits out some JSON. Ideally, I'd like to make POE::Component::Server::PSGI much more resource-conservative and have the daemon+webservice all one thingy, so we will see.

Saturday, March 20, 2010

Modules that need love

So I have been busy with work lately and I haven't had the drive or the time to work on some of my modules.

The other day in IRC, someone mentioned that there were problems with POE::Component::PubSub. And I readily admit that there are likely problems with that module; I had already replaced it with POEx::PubSub. But the problem is this: POE::Component::Jabber was scheduled for an overhaul last summer, but I was suddenly employed and my work took a turn toward something else even more awesome. PCJ's dependencies were the first to go through the conversion, largely because I needed the new PubSub while working on POEx::WorkerPool.

All that said, I never got around to actually taking the plunge with PCJ and doing the needed conversion to Moose and Roles.

So. Some of my modules need some love. POEx::Role::SessionInstantiation specifically has been getting some fail reports. That ties into a whole bunch of other modules. Speaking of fail reports, my MooseX::CompileTime::Traits was showing failures on 5.11.x. I need to investigate that if I want my modules to run cleanly on 5.12. POEx::ProxySession in particular could use some real love, because I really believe it is the way forward for distributed modern POE versus using old-school IKC.

And I really need to follow through on my promises regarding POE work, specifically taking the time to bend Reflex to my will. This would also include organizing talks for the YAPCs this year.

Anyhow, with a hopeful lull coming up in work soonish, I might get the time to improve the state of the art. And maybe even take the time to take ownership of that module that mst was pushing some weeks ago. I know I owe him a test for Web::Simple that involved an odd combination for the signature of a particular dispatch.

All Perl modules need love. I encourage people that read the Ironman feed to step up and spread some love to your favorite modules. Anything from doc patches, to tests, to feature implementations; no one I know ever turns down work on their projects. In the end, we end up using each other's modules to deliver high-performing solutions to our clients and partners. Let's make sure we give back.

Wednesday, March 10, 2010

Of Exquisite Nerd Hackery

DISCLAIMER: The following post is for educational purposes only. This post details circumvention of a registration function in a popular flash game. The author does not condone copyright infringement. The author paid for the game and encourages others to support independent developers.

And so our journey begins. Robokill is an awesome game. It is like Legend of Zelda with guns. I sank some time into it last night. Just ask my wife; I was cursing up a storm because one of the levels was being a bitch to finish. In other words, I ♥ Robokill.

But then came the end of the demo. Up popped the REGISTER ME screen in the game asking for an email address. Hrm. Odd. So I plunked down "test@test.com" just to see what would happen. It failed, obviously, but Firebug happened to be on and I saw exactly what kind of request was being sent and the response it was getting. It was just a dumb HTTP GET with the email and a salt in the URL like so:


And the response was just plain text:
test@test.com is not a registered email address! caccabad


Wait. What? That's all? The gears started turning at this point.

What if I could somehow subvert that request and return a valid result? And what does a valid result even look like? First things first: I needed to make sure DNS pointed to somewhere I control.

I logged into the local fileserver. Here at home, I am running dnsmasq, which is an awesome little utility that provides DHCP services and local DNS + forwarding. I added an address for www.rocksolidarcade.com and pointed it back to this machine.

The next step was to actually respond to the request. This is when I thought of mst's wonderful Web::Simple. So my first attempt at returning a result was this:
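
(The embedded code didn't survive the archive, so what follows is a rough reconstruction of the idea rather than the original hacks.psgi. The first, failed, theory was that the server simply hashed the email and echoed it back.)

#!/usr/bin/env perl
# hacks.psgi -- a rough reconstruction of the idea, not the original code
use Web::Simple 'RegHack';

{
    package RegHack;
    use Digest::MD5 'md5_hex';

    sub dispatch_request {
        # answer any GET carrying an email parameter with a hash of our own
        sub (GET + ?email=) {
            my ($self, $email) = @_;
            [ 200,
              [ 'Content-type' => 'text/plain' ],
              [ "$email is a registered email address! " . md5_hex($email) ] ];
        },
    }
}

RegHack->run_if_script;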


So how did I run this? I mean it looks like a dumb CGI script. Easy. Plack. I simply said:
sudo plackup -p 80 hacks.psgi


(I know running on port 80 via sudo is fail, but remember this is a quick hack. A better solution would have been to write some iptables rules to send the traffic to a non-privileged port.)

As you can see, I naively thought that perhaps the server was simply hashing the result and returning it. It would need to be something the client could do too. That failed. So I tried other combinations of things and ultimately wasn't able to make any headway.

Then another bright idea came to mind: what if I decompiled the .swf and peered inside the ActionScript to see what it was doing? So I downloaded a couple of flash decompilers and installed them in a Windows VM. The first one was lame and wouldn't let me see the ActionScript at all without paying (har har). The second one was much more generous, though. It let me look but not copy the code. WIN.

So I take a peek inside and what do I see? Something like this:


Nuh-uh. Really? That dumb?

So I adjust my code like so:


And like magic it works.

The last step in the process for me is to be able to play the game offline. So I try to load the .swf directly in the browser. So far so good. Even the register check still works. But when I go to press "Start" it wants to pop up a window and take me back to their website. Well, that is dumb. I want to play it offline.

So back into the decompiler I go, and I find another tidbit that is explicitly checking the URL for their domain name. Huh. So I adjust my Web::Simple app one last time to serve up the file directly:


Now it works whenever I want. But was it really worth my time and effort? No. All in all, it took me about 1.5 hours from start to finish (knowing nothing about Web::Simple, Plack, and futzing with flash decompilers). It would have been much easier to just go get the credit card from the wallet in the other room and pay them the ten bucks first instead of showing off my 1337 skillz. That said, this morning, I did pay them for their wonderful game.

Tuesday, March 2, 2010

Encouraging Contributors

I think one of the most awesome things about doing open source/free software work is working with others toward a common goal. Of course, when you hold the reins of a project you can't do everything to please everyone. So the simplest course of action is to encourage people to write failing tests for the functionality they would like to see.

This is surprisingly effective for getting regular, active contributors. Of course, once the test is written and the contributor is confident it tests what they want it to test, it is a simple hop to peering into the code itself and making their own test pass. And usually all that takes is guiding the newfound contributor in the right direction: "Oh, you're testing for stuff in Flarg.pm; you know, you could probably add that easily if you do X".

I think that near-instant gratification is important and it makes herding the cats that much easier. It makes my life that much easier and I am sure I make other people's lives easier when I participate in the same fashion.

That said, Perl has developed a great testing culture that makes it very easy to cast the "write me a test" net far and wide. Plunk down a new t/foo.t and fire it off with prove -l. No need to spin up a gigantic harness to make the magic happen.
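
In practice that can be as small as the following; Flarg and its hoped-for feature are just the stand-ins from the example above:

# t/foo.t -- the kind of small failing test a would-be contributor can send
use strict;
use warnings;
use Test::More tests => 1;

use Flarg;    # hypothetical module from the example above

ok( Flarg->can('shiny_new_feature'),
    'Flarg exposes the feature I wish it had' );

Run it with prove -l t/foo.t and send the failure along.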

mst is (in)famous for getting work out of people by suggesting projects and ideas to others, even going so far as to private-message me on IRC about an orphaned module that could use some POEx::SessionInstantiation lovin'. And his method for encouraging contributors is equally valid and important.

In the grand scheme of things, we are all trying to make software development easier. Perl, CPAN, its culture of testing, the community at large-- It all just makes it that much better.

Saturday, February 20, 2010

Some DBIC::API changes in the pipe and other ramblings

I heart git. I really do. It enables a great development paradigm that allows for easy collaboration on projects with developers all over the world.

That said, there is a branch of DBIC::API that you should be tracking: the object_split branch. Apparently abraxxa needed some functionality that I had mostly written out when developing the 2.x release. He added it back in. And for the most part, it is clean. I still need to push my cleanup changes up to the catagits repo, though.

Abraxxa also added the ability to specify a data root for a single object. ExtJS specifically has that part of Action.Form uncustomizable (which will likely be fixed at some point). That means it expects the result to always be in "data", which doesn't jibe with being able to customize the result key for things like RESTful stores.

Also, rafl has been developing a better way to do the dispatching for multiple vs. single objects. Right now it pretty much forwards to objects and then forwards on to the intended action. This is fail and breaks chaining (as in the case of single objects, which is addressed by the object_split branch). Ideally, we'd like to implement a chain that works conditionally: A -> (B || C) -> D -> (E || F). This would allow us to keep the same singular functions in place (update_or_create) that operate on a list of objects, but the prep to get to that point would depend on either captured args or whatever.

It isn't there yet, but we will make it work. We want this module to be as flexible as possible. And we are doing that.

DBIC::API is being used, right now, in production, for various applications including ExtJS (which is the biggest driver of our development right now). It should be more than suitable for your basic CRUD tasks. Ideally, we would like to gain more developer support for other systems that speak dialects other than REST and RPC. Your contributions would be very much welcome.

We have a lot of momentum at the moment in making this project the best we can. At some point, I'd very much like to push to have it included in various Task projects (Kensho, anyone?), but that is a little ways away.

In other news, it is starting to look like I will be attending YAPC::EU. It isn't final yet, but the finances are in place to execute. This means I need to get the ball rolling on writing abstracts, and doing talk submissions for both YAPC::NA and EU (of course, I am going to NA!).

If you have never attended a Perl workshop or conference, I highly recommend that you do. It is where the vast community gathers to share the latest and greatest in ideas. Some are more practical. Some are more theoretical. But all of them good. You can learn a lot about modern Perl practices, talk directly with core developers of key technologies such as Moose, Catalyst, POE, etc., and get a feel for the people that come together to solve these common problems. It is enlightening.

Wednesday, February 10, 2010

POE and its infinite awesomeness

For the past couple of years, I have been one of the very few to actually mention or discuss POE specific technologies at any of the various Perl conferences or workshops. This makes me sad.

POE is a fantastic framework with a very well defined and mature API. As a project it is... wise. It isn't old. It isn't dead. I would argue that POE is one of the longest-lived and still-thriving Perl projects on CPAN. And in fact, POE handles many mission-critical apps on the backends of several companies. You just don't see it.

With the advent of newer modules that claim to be the messiah of asynchronous events, POE has developed a sort of marketing problem. POE doesn't release as often because of how mature the framework is. In addition, several other factors contribute to POE not grabbing the spotlight. This leads people to believe POE is old school, or that it is too big to be used for their one-off projects. I'd like to fix that. I'd like to educate people on why POE is very much still alive and what that means when you have asynchronous requirements for your heavy-lifting backends.

That said, this YAPC::NA I am attempting to organize enough speakers on the topic of POE (Intro, Core, Extensions, Next-Gen POE technologies, Application-specific successes) that the conference organizers will have no choice but to give us a significant block of time on one of the tracks. This will involve a lot of work. There is a core group of us meeting in irc://irc.perl.org/#reflex to make this happen. We have a lot of code to write.

So please, if you use POE and have an interest in seeing the next evolution of POE work, come join us. We won't bite. Promise.

Tuesday, February 2, 2010

DBIC::API Update

Be prepared, folks. The next installment of DBIC::API will break backwards compatibility. This means that if you were relying on the old behavior of manipulating the stash for large swaths of customization, your customizations will no longer work. The version number will jump significantly in case this isn't warning enough.

So far, the Changes will look something like this:
  • Merge create and update into update_or_create
  • object is much more advanced now:
    • Identifier can be omitted, and data_root in the request is interpreted
  • Because of the above, one object or several are now possible for update or create
  • Create and Update object validation now happens iteratively
  • Creates and Updates can be mixed inside a single bulk request
  • All modifying actions on the database occur within an all-or-nothing transaction
  • Much of the DBIC search parameter munging has been properly moved into the RequestArguments role, in the form of a trigger on 'search' that populates 'search_parameters' and 'search_attributes', which correspond directly to ->search($parameters, $attributes) (see the sketch below)
  • Error handling is now much more consistent, using Try::Tiny everywhere possible
  • Tests are now modernized and use JSON::Any
  • Extending is now explicitly done via Moose method modifiers
  • The only portion of the stash in use is to allow runtime definition of create/update_allows
  • list is now broken down into several steps:
    • list_munge_parameters
    • list_perform_search
    • list_format_output
    • row_format_output (which is just a passthrough per row)
There will likely be a couple more bullet points, but as can be plainly seen, this is a BIG update. I hope to have the tests and the distribution ready to ship to CPAN late tomorrow (it is still sitting in my local SVK repo).
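
To illustrate the trigger bullet above: the attribute names come from the changelog, but the package name and trigger body here are invented for illustration only.

package My::RequestArguments;    # illustrative, not the real role
use Moose::Role;

has search_parameters => ( is => 'rw', isa => 'HashRef|ArrayRef' );
has search_attributes => ( is => 'rw', isa => 'HashRef' );

has search => (
    is      => 'rw',
    isa     => 'HashRef',
    trigger => sub {
        my ( $self, $new_search ) = @_;
        # Split the incoming spec into the two halves that map straight
        # onto $resultset->search( $parameters, $attributes ).
        $self->search_parameters( $new_search->{params} // {} );
        $self->search_attributes( $new_search->{attrs}  // {} );
    },
);

1;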

This update will bring DBIC::API to the next level in terms of using it as a web service, with more functionality built into the core by default.

If you happen to be attending Frozen Perl 2010, I'll be giving a presentation on Catalyst datagrids, specifically my melding of ExtJS with DBIC::API and how dumb easy it is to hook the two together now (which makes my work life much simpler).

Anyhow, if you are still doing lots of heavy, custom CRUD exposed via web service, I hope this update will make it more appealing to switch to DBIC::API to handle the more mundane parts.

Monday, January 25, 2010

YAPC::EU and course ideas

After spending time at OPW2010 with many wonderful people, I was exposed to the thought that YAPC::EU was in fact more better (double comparative) than YAPC::NA. So I decided to look into what exactly it would cost to spend a week in wonderfully historic Pisa, Italy (where YAPC::EU::2010 will be) in the first week of August.

Yikes.

Airfare alone is frightening.

But I am not going to give up on the idea, at least, not until I consider other possibilities.

In IRC tonight, I had the spontaneous idea of putting together a course with a complete kit of course materials, exercises, etc. My idea for the class goes as such: Modernizing Legacy Perl Applications. Basically, we would explore a pre-modern Perl app (CGI.pm, raw blessed hashes, raw DBI, raw forking, etc.) and modernize it over two days (16 hours total). Overall, the class would provide a practical crash course in various aspects of modern Perl (and lots of modules from Task::Kensho) and also a general outline for students to use when implementing their own revitalization projects. It would be a tight squeeze for the time allotted, but so far I haven't seen any other classes take such a tack for introducing modern Perl.

While there are classes that explore Moose and Catalyst from the beginner level on up, no one seems to put it all together. I'd like to do that. And I'd like for it to help offset the costs of attending the wonderful YAPCs abroad.

Right now the course idea seems like an insurmountable task, but with a couple of interested parties willing to invest time in resource development and teaching, it is very feasible.

So we will see. I may see a leaning tower yet :)

Saturday, January 16, 2010

OPW2010

What an incredibly awesome conference! Kudos to Chris Prather (perigrin) and his lovely wife for organizing this event. It has been a big success in my book and many others'. My talk went well and I will publish the slides and code to github soonish.

Tomorrow is the hackathon gathering where I hope to dig into updating some bitrotten code of mine. And also perhaps taking a look at metaclass serialization.

Short post. More tomorrow perhaps.

Thursday, January 7, 2010

More DBIC::API

I recently added a new feature to DBIC::API. You can now specify "as", passed through to ->search(), as a complement to the "select" attribute. This doesn't produce a true column alias (the DBIx::Class docs say "as" is only for internal access), but it does give you an accessor of that name (via get_column with the specified "as") for columns that are actually results of DB functions such as COUNT. This means you can specify:

?select.0.count=some_column&as.0=my_count

using CGI::Expand syntax to produce the right arguments to ->search

{ select => [ { count => 'some_column' } ], as => [ 'my_count' ] }
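
On the DBIx::Class side, that is roughly equivalent to the following (the resultset name is made up, and $schema is assumed to be an already-connected DBIx::Class::Schema):

# roughly what those arguments do once they reach DBIx::Class
my $rs = $schema->resultset('Album')->search(
    {},
    {
        select => [ { count => 'some_column' } ],
        as     => [ 'my_count' ],
    },
);

# function results don't get a normal column accessor, so fetch them
# with get_column and the 'as' name, exactly as described above
my $count = $rs->first->get_column('my_count');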


This is rockin.

As funky as the CGI::Expand syntax is, it is incredibly flexible. This one minor feature let me avoid a more complicated setup for work and instead rely on what DBIC::API is good for: exposing DBIx::Class via web service.

I'll do another release of DBIC::API soon.

Sunday, January 3, 2010

New Year Hacking

So in my last post, I made mention of brokenness in DBIC::API. Specifically, prefetch_allows only considered hashes with a single key. In my defense, I didn't have a more complicated use case. Also, other use cases, in and around the internets and even in the docs for DBIC itself, don't include those kinds of complex prefetches. So I didn't consider that someone might want to prefetch/join on multiple keys like that.

Mea culpa.

So, my latest commit to the DBIC::API repo, on trunk, makes abraxxa's modified (and initially broken) test pass. First code of the new year! When abraxxa gets back to me, I'll do another release for general consumption.

Some other random news: I'll be talking at the Perl Oasis Workshop on workers, job queues, etc., and how POEx::WorkerPool is your solution to making things easy. Also, I will be giving a short talk at the Frozen Perl Workshop making use of my recent hacking on DBIC::API and ExtJS for doing datagrids.

Before the latter talk, I hope to have something up on CPAN that is a ready-made datagrid you merely plop in and configure. I had written about doing a datagrid like that last year, and ultimately had to delay delivering on that promise until I had made certain advancements in core technologies. Now I am getting DBIC::API how I like it, and I already have it talking to ExtJS's RESTful stores without custom query code.

Anyhow, I am excited to get back into the swing of things for work and excited to meet all of the smart Perl peeps you always meet at these kinds of conferences.