
Saturday, October 24, 2009

Refactoring POE::Component::Jabber

So tonight, I've started to refactor POE::Component::Jabber to make use of updated tools, including ones that I have written. That list includes everything from MooseX::Declare to POEx::Role::TCPClient. The following is a brainstorming session.

The first step is to break the machine out of its mold, so to speak. When I embarked on the 3.0 design, I recognized that the one-connection limit was purely artificial; there should really be no reason not to enable multiple connections. By using a level of indirection via POE::Component::PubSub, I could have delivered all of the pertinent state and connection information along with the received packet. But the changes introduced in 3.0 already seemed a little overwhelming, and I didn't want to fundamentally alter the behavior of one connection manager per connection. Now I want to change that. Having developed POEx::Role::TCPClient for the possibility of multiple outbound connections with no arbitrary limits, I have solved the multiple-connection problem. In fact, consuming that role wipes out large swaths of wheel-reinvention code.
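
To give a flavor of what that buys me, here is a rough sketch of a class consuming the role. This is not PCJ code, and the exact signatures are from memory, so treat them as approximate:

    use MooseX::Declare;

    class My::XMPP::Client {
        # POEx::Role::TCPClient wraps up the SocketFactory/ReadWrite wheel
        # management and tracks one wheel per outbound connection.
        with 'POEx::Role::TCPClient';

        # The role requires the consumer to handle inbound data; the wheel id
        # tells us which of the (possibly many) connections the data came from.
        method handle_inbound_data (Any $data, Int $wheel_id) is Event {
            warn "packet received on connection $wheel_id\n";
        }
    }

    # Each 'connect' call opens another connection; nothing limits us to one.
    # $client->yield('connect', remote_address => 'jabber.org', remote_port => 5222);

That is the kind of wheel bookkeeping I no longer have to write myself.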

And I think I could take that a step further and even allow a different connection type on each connection. Right now this is handled per instance of the connection manager, with arguments passed to the constructor. The connection type determines which "dialect helper" gets instantiated and how the session is constructed. Currently, dialect helpers are a poor man's POE "role", in the sense that I hacked together an API for crudely introspecting which events a helper exposes, and then used that when creating the main PCJ session. The right way to do it instead is to have a /real/ Moose::Role represent the given functionality and consume it as needed.
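
As a rough illustration of the direction (the names here are hypothetical, not the actual PCJ API), a dialect becomes a plain Moose::Role that supplies its negotiation behavior and states what it needs from the consumer:

    package My::Dialect::Legacy;   # hypothetical dialect role
    use Moose::Role;

    # The consuming connection class must know how to put bytes on the wire.
    requires 'send_packet';

    # Dialect-specific stream negotiation lives in the role itself.
    sub negotiate_stream {
        my ($self) = @_;
        $self->send_packet('<stream:stream to="example.org" ...>');
    }

    no Moose::Role;
    1;

Then the connection class simply does with 'My::Dialect::Legacy' (or whichever dialect applies) and the introspection hackery goes away.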

Even then, that isn't enough. The dialect-specific portions are only relevant during the connection negotiation phase. Once the connection is negotiated, those bits are never used again. That leads me to think that perhaps I need to abstract them away and use a separate class for making connections. Then at the top level, PCJ merely delegates between a couple of different objects, mainly the connection maker, the pubsub component, and a controlling session for it all, while providing a uniform API for communicating over XMPP.
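
To sketch the shape of that delegation (hypothetical names again, and the real PubSub component is event-driven rather than a plain object, so this is only illustrative):

    package My::PCJ::Facade;   # not the real PCJ class
    use Moose;

    # The connection maker owns negotiation; the top level just forwards
    # a uniform API to whatever sits underneath.
    has connector => (
        is      => 'ro',
        does    => 'My::Dialect::Legacy',
        handles => [qw/negotiate_stream send_packet/],
    );

    # Stand-in for the PubSub piece that fans packets out to listeners.
    has publisher => (
        is      => 'ro',
        isa     => 'Object',
        handles => { publish_packet => 'publish' },
    );

    __PACKAGE__->meta->make_immutable;
    1;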

So we will see. I am going to work on it and see where it takes me.

Sunday, October 18, 2009

Of Module Evaluation

One of the reasons I enjoy being involved in the Perl community is knowing that I can help people directly when it comes to evaluating my modules.

Recently, stephan48 in irc.perl.org/#poe needed to consider different options for doing some kind of multiprocess worker solution. A number of options were presented to him and I was lucky enough to garner his interest in trying out POEx::WorkerPool. It was a great learning experience in how people work through a distribution for evaluation.

The first thing he tried was to copy and paste my synopsis directly and run it. It had compile problems because I had made certain assumptions; for the most part, I wasn't expecting anyone to actually try to run it. In my defense, I consider the synopsis to give a broad overview of the structure of use, not a full-blown example suitable for tweak-and-run solutions.

Second, he had no experience with the high-level tools on which POEx::WorkerPool is built, which meant that MooseX::Declare was foreign to him. His second stab at the code involved shifting $self from @_, which gave him problems.
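
For anyone else evaluating it, the thing to know is that MooseX::Declare's method keyword already provides $self, so the usual 'my $self = shift;' idiom just fights with the signature handling:

    use MooseX::Declare;

    class Example {
        # $self is already in scope inside a method body; there is no need
        # to shift it off of @_ the way a plain sub would.
        method greet (Str $name) {
            return 'hello, ' . $name . ', from ' . ref $self;
        }
    }

    print Example->new->greet('stephan48'), "\n";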

The third problem he encountered was an advanced and little-used POE feature: all exceptions raised in event handlers are captured and propagated as a signal, which requires a signal handler. Otherwise, the exception is devoured without so much as a peep to the developer. In my synopsis, I hadn't used all of the robust exception handling required to make sure things went smoothly. So his attempt to have one worker with multiple queued jobs failed, because the method he used immediately started the worker and prevented any further pushes onto the job queue. That failure was delivered as an exception, and to him it looked as if the magic wasn't happening at all: only a single job would run.
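
For completeness, this is roughly what the missing piece looks like: register a watcher for POE's DIE signal and mark it handled, so exceptions from event handlers actually reach you:

    use POE;

    POE::Session->create(
        inline_states => {
            _start => sub {
                # DIE is how the kernel delivers exceptions raised in handlers.
                $_[KERNEL]->sig( DIE => 'handle_die' );
                $_[KERNEL]->yield('explode');
            },
            explode => sub { die "something went wrong\n" },
            handle_die => sub {
                my $ex = $_[ARG1];
                warn "exception in '$ex->{event}': $ex->{error_str}";
                # Without sig_handled(), the signal remains unhandled.
                $_[KERNEL]->sig_handled();
            },
        },
    );

    POE::Kernel->run();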

All in all, we got him going eventually, and he came to the conclusion that perhaps this module wasn't the right tool for his job after all. And that's awesome that I was able to help him figure that out. I recognize that the tools that I build and use for my own projects may not be suitable for others. And while I see many advantages to using my solution for a fully scalable and maintainable system, sometimes, all you need is a little POE::Wheel::Run action to grease the wheels.

The upside is that I will be revisiting the documentation and perhaps building a mini-manual similar in structure to the Moose::Manual and the POE wiki. The casual developer has a problem to solve, and doing 80% of the work for them in a well-documented scaffold example really makes a difference.

Saturday, October 10, 2009

MooseX::CompileTime::Traits

In developing POEx::WorkerPool, I did a lot of abstraction in order to make it easier to scavenge pieces of it for other uses, and also to allow customization on every level.

Initially, this meant that the classes inside POEx::WorkerPool are simply bare. They consume roles that contain the actual implementation details, which ultimately lets you consume those roles in other projects, gaining full functionality without subclassing. The second feature of the bare classes is that they contain an import() method that does some magic to let you specify traits for those classes. Say you wanted to alter how WorkerPool does its queuing? You could simply write your own Moose::Role and pass it to 'use' to gain a globally altering effect. Rockin'.
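
The usage looked roughly like this (the trait and the method it advises are made up, and the exact import syntax is from memory, so treat it as approximate):

    # In lib/My/Custom/Queueing.pm
    package My::Custom::Queueing;
    use Moose::Role;

    # Hypothetical advice around a hypothetically named enqueue method.
    around enqueue_job => sub {
        my ($orig, $self, @args) = @_;
        warn "about to enqueue a job\n";
        return $self->$orig(@args);
    };

    no Moose::Role;
    1;

    # In your program: applied at use time, to the class itself, so every
    # instance in the process picks up the altered queuing behavior.
    use POEx::WorkerPool traits => ['My::Custom::Queueing'];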

But it had its shortcomings. For one, I was doing all of the parsing logic, validation, and role application (with 'with', no less) myself, and it broke in several cases. For another, this was the second project to gain this ability, POEx::Role::SessionInstantiation being the first. What I needed was a proper encapsulation of this functionality.

And so MooseX::CompileTime::Traits was born. Now, some of you may be asking: why not MooseX::Traits::CompileTime? My ultimate reason is that I didn't want people to confuse my module with a subclass of jrockway's module. MooseX::Traits applies traits at runtime using a custom constructor (new_with_traits()), which means your traits are actually applied on a per-instance basis. MooseX::CompileTime::Traits, on the other hand, affects things at the class level. Globally.
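
To make the distinction concrete, here is what the per-instance style looks like with MooseX::Traits (the trait itself is a throwaway example):

    {
        package My::Class;
        use Moose;
        with 'MooseX::Traits';      # provides new_with_traits()
    }
    {
        package My::Louder;         # throwaway example trait
        use Moose::Role;
    }

    # Runtime, per-instance: only this object gets the trait.
    my $one = My::Class->new_with_traits( traits => ['My::Louder'] );
    my $two = My::Class->new;       # untouched

    print $one->does('My::Louder') ? "one does\n" : "one does not\n";
    print $two->does('My::Louder') ? "two does\n" : "two does not\n";

With MooseX::CompileTime::Traits, by contrast, the role is applied to the class at use time, so every instance carries it.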

I heard some grumblings that this might be a bad thing, but hear me out. While POEx::WorkerPool had multiple levels of abstraction applied to everything from the subprocess, to the worker in charge of it, to the pool itself, there was no clear way to tell it to do something different. To do that, I'd have to subclass my way up the chain of things inside POEx::WorkerPool to get the custom behavior I needed at the lowest levels. Perhaps that is a design issue, but the simpler solution is to apply a trait at compile time without having to subclass a thing. I don't want this behavior selectively applied to some instances and not others; I want to change the behavior on all instances so it fits my needs, without subclassing the entire project.

And I needed to do this for work. The daemon I am working on needed to do some initialization after the worker subprocess had forked. With MooseX::CompileTime::Traits, though, it became easy to provide a role for the GutsLoader that advises some of the default behavior to do what I wanted. Now I can invoke code after the fork has taken place without a large amount of work (the role to do this ended up being 10 lines).
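
The role itself is nothing exotic. This sketch is hypothetical (the real GutsLoader hook it advises is not named here), but it shows the general shape of that ten-line fix:

    package My::PostForkSetup;   # hypothetical role, not the work code
    use Moose::Role;

    requires 'setup_child';      # the hypothetically named post-fork hook

    # Run our own initialization in the child once the worker has forked.
    after setup_child => sub {
        my ($self) = @_;
        warn "child $$ forked; doing post-fork initialization\n";
    };

    no Moose::Role;
    1;

    # Applied globally at compile time, e.g.:
    # use POEx::WorkerPool traits => ['My::PostForkSetup'];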

So, if you want to give your classes (including the internals) the ability to absorb outside behaviors so that people can customize them while maintaining a very loose coupling, give MooseX::CompileTime::Traits a looksee.

Saturday, October 3, 2009

Production Use

So I committed a mortal sin and have been reading the Slashdot comments on the recent perl-5.11.0 announcement posted there. Like all recent conversations regarding Perl, the discussion inevitably drifts toward Perl 5 vs. Perl 6. And what I am finding in a lot of comments, not just on Slashdot, is an unhealthy apologist attitude of defending Perl 6 by saying it is production ready, etc.

My beef is with this one comment: http://developers.slashdot.org/comments.pl?sid=1391409&cid=29627915

"Really? Can you give examples of problems in Rakudo that would stop it being used in production? Didn't think so."

I guess I should be writing this in a Slashdot comment, but I wanted to address this here and a little more broadly.

The number one rebuttal to the above comment is simply this: IO. I have previously attempted a naive port of POE to Rakudo and found it very lacking. There just isn't sufficient IO support in the current implementations of Rakudo and Parrot to build truly sound production systems. I can't remember a single system I've implemented that didn't at some point involve complex IO, such as multiplexing over multiple sockets.

Now I fully support the whole Perl 6 and Parrot thing. I really do. And when they finally get to the point where I can start building the tools that I currently take for granted, then we can start talking about production use.

The entire Parrot and Perl 6 movement has been nothing but awesome in terms of idea generation. The method argument syntax defined in MooseX::Method::Signatures, the concept of roles, and all sorts of other great things have trickled back into Perl 5, and I even use those ideas, in their current Perl 5 implementations, in production systems. But the source of those ideas just isn't ready.