"The DarkPAN is too big and scary to change Perl 5's defaults all even by the time of Perl 5.14" -- chromatic
DarkPAN was discussed quite a bit at YAPC::NA. But really, what is it? How is it defined? Who are these people who have a vested interest in Perl and yet do not participate? I think attempting to appease a faceless entity with no defined boundaries has fail written all over it. It is fear mongering.
If we break it, they can choose not to upgrade. What about bug fixes? I hear you say. That is the joy of open source. There are lots of unemployed hackers out there who would be willing to backport those fixes for you. Oh, heaven forbid! Actually paying open source hackers to write code! The sky is falling!
We need to stop this meme of DarkPAN. Perl does not belong to DarkPAN. It belongs to those of us who participate in its well-being. And if the amorphous corporate overlords are angered by our changes, then they need to get involved and list their grievances, just like Joe OpenSource Hacker.
But until we hear /anything/ from this unholy DarkPAN, I say we make the decisions that are best for our language of choice. I once had a professor who was fond of saying "Silence is tacit consent." And in this case, I would say that is extremely apropos of Perl stewardship.
Tuesday, June 30, 2009
Tuesday, June 23, 2009
Halfway through
Oops! Missed my blog post for yesterday, but that is only because there has been so much awesome stuff going on here in Pittsburgh, PA at YAPC::NA::2009.
I gave my POE::Filters talk today and was a little stressed about it. I didn't finish the slides until last night, and then I guess I talked a little too fast. But it was alright in the end since no one really knew what the hell I was talking about except the POE hackers in the audience.
I gave a POE primer at the front of my talk (and will give another tomorrow), which apparently was derided as not catchy enough. Shrug. It was either that, or have a bunch of people stare dumbfounded at my code examples, wondering what was happening. Sadly, I was scheduled against such heavyweights as nothingmuch and jrockway and so I had all of 20 people show up.
On the plus side, lots of hallway track time has led to some great conversations and involvement in some good things, such as hacking on perl core.
Hacking on perl core. That is pretty deep stuff. My C isn't exactly super strong, but it is really important I believe. Important enough that I want to get involved. There is a serious problem in that the core of perl is not documented. I even made mention of that for the EPO's document bounties. It got votes, just not enough to offer a bounty for it.
So I am getting involved. Our first task is to compare the tests between miniperl and perl and see where the shortcomings are for miniperl. A deep dive into miniperl gives us a good look into things like the tokenizer, etc.
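To make that concrete, here is a rough sketch of how one might flag the tests that only fail under miniperl. The paths and test directories are assumptions about a built perl source tree, not a description of any actual core tooling.

    #!/usr/bin/perl
    # Rough sketch, assuming a built perl source tree with ./perl and ./miniperl.
    # Run each test under both binaries and report the ones that only fail
    # under miniperl -- those mark the edges of what miniperl can handle.
    use strict;
    use warnings;

    my @tests = glob 't/base/*.t t/comp/*.t';

    for my $test (@tests) {
        my $perl_ok = system( './perl',     '-Ilib', $test ) == 0;
        my $mini_ok = system( './miniperl', '-Ilib', $test ) == 0;
        print "$test: fails only under miniperl\n" if $perl_ok && !$mini_ok;
    }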
And write about it. Blogging about it exposes the tribal knowledge to the intertubes.
And so a new line of topics begins!
Monday, June 15, 2009
Getting Closer
With YAPC::NA only a handful of days away, it is crunch time for me. I had set a deadline of last night to have Voltron finished, and, as is typical, life got in the way. I am close though and should finish today. But the schedule I've made for myself for this week includes a 'few days' to make slides and solidify my presentations. So I have a little buffer room, but really, I don't want to be scrambling to do slides the weekend before or, gasp, during someone else's talk.
When it comes to building a complete stack for a project, one of the things I love most is recognizing missing functionality and implementing it in the proper layer. This time around, it had to do with arguments passed to callback events from TCPClient. In a single-connection setup, it was silly to consider context for your callbacks: you only had one connection, so you knew which context you were operating in. But at the same time, there was nothing in the TCPClient implementation that limited things to a single connection. When I got up to the ProxySession layer, the same behavior followed. While there were no limitations in the code to only one connection, the API gave you no way to determine from which connection a callback originated. And I was okay with that for the time being, until I got up to the Voltron layer, imagined a single application connecting to multiple application servers, and realized it was a silly limitation. So down we went.
TCPClient was changed to allow better per-connection tracking, multiple simultaneous connect calls, and some other details. Then ProxySession was changed to always return the connection id in callback events, and also to enable sending along user data that would show up in the callback. Another oversight was my decorator roles: ProxyEvent was its own thing when really it should compose Event, because you can't have a proxied event without a regular event. That cleaned up some ugly syntax in my tests.
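Roughly, this is the shape of what a callback sees now. To be clear, this is a hypothetical sketch with made-up names, not the actual POEx::ProxySession API.

    # Hypothetical sketch only -- names are illustrative, not the real API.
    # The point: callbacks now receive the connection id plus whatever user
    # data was sent along, so a client talking to several servers can tell
    # the responses apart.
    sub on_subscribed {
        my ( $self, $connection_id, $user_data, $payload ) = @_;

        my $which_server = $user_data->{server_name};
        warn "subscribed via connection $connection_id to $which_server\n";
    }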
And while all of this is great, and I am making a lot of progress, Voltron still isn't complete because I've been adding functionality into my lower layers. In the end though, it will make Voltron an even simpler module with a lot of the complexity encapsulated away.
After I finish Voltron today, I have to start on a simple XMLRPC server/client implementation for my POE::Filters talk. And while I want to Moose that up a bit, I will likely do it in a very non-Moose fashion (save for POE::Filter::XML::RPC, since that was my first foray into the Moose world. Attributes rock). Then again, I really, really like using POEx::Role::SessionInstantiation. It makes my code so much cleaner and all of the POE bits less obvious. We'll see.
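For anyone who hasn't seen what the talk covers, the whole POE::Filter interface boils down to get() and put(). Here is a minimal example with the stock line filter, just to set the scene; it is not taken from the talk material itself.

    use strict;
    use warnings;
    use POE::Filter::Line;

    my $filter = POE::Filter::Line->new;

    # Raw network chunks in, whole lines out (partial lines are buffered)...
    my $lines = $filter->get( [ "GET / HTTP/1.0\r\nHost: exa", "mple.com\r\n" ] );
    print "$_\n" for @$lines;   # "GET / HTTP/1.0", "Host: example.com"

    # ...and structured items back out to raw octets for the socket.
    my $chunks = $filter->put( [ '200 OK' ] );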
Next week, I'll be blogging from YAPC::NA and will likely see a lot of you regular Perl Iron Man readers/writers there.
Monday, June 8, 2009
POEx::ProxySession is alive
So I have been putting time into my IKC-like replacement and with last night's (err, this morning's) commits, I have a working server/client setup that successfully proxies a class composed with POEx::Role::SessionInstantiation.
So like I was explaining a couple of weeks ago, I needed a simpler IKC that provided persistent local subscribed sessions that proxied posts through the server, back to the original published session. Mainly, I need Voltron to have access to these proxied sessions so that it is trivial to set up applications and to post events to them using PubSub.
Anyhow, it was an arduous task getting this thing to work. There were some problems, naturally. The biggest problem was in the client. Creating a proxy session using the serialized Moose::Meta::Class was rife with issues, largely because Moose goes to great lengths to optimize its meta internals via coderefs. And Storable+B::Deparse doesn't seem to work as advertised, especially when there are hooks introduced into the parsing process via Devel::Declare. So I couldn't count on coderefs.
So that meant that after getting the deserialized Meta::Class, I couldn't touch any of the methods on the object itself. But as long as I broke encapsulation on the Meta::Class (mainly via ->{methods}), I could get hold of all the metadata I needed.
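In other words, something like this. It is a hedged sketch: the hash key reflects my reading of Moose's internals at the time, read_message_from_socket() is a made-up placeholder, and none of it is the actual client code.

    use strict;
    use warnings;
    use Storable qw( thaw );

    # Hedged sketch -- read_message_from_socket() is an illustrative
    # placeholder for however the serialized metaclass arrives off the wire.
    my $frozen_meta = read_message_from_socket();
    my $thawed_meta = thaw($frozen_meta);

    # None of the coderef-backed Moose machinery survives serialization, so
    # rather than calling methods on the metaclass we reach straight into the
    # underlying hash (a snapshot of Moose internals, not a stable API).
    my @proxy_methods;
    for my $name ( keys %{ $thawed_meta->{methods} } ) {
        push @proxy_methods, { name => $name };   # just enough to rebuild a proxy locally
    }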
The second issue had to do with parsing method signatures, and in the end it made a lot of sense once I understood everything that was going on in the environment. MXD makes use of namespace::autoclean, which scrubs the symbol table after initial compilation. That means that all of my imported POEx::Types were getting scrubbed away, which was a problem when I was passing method signature strings to MXMS::Meta::Method->wrap(). It couldn't find the symbols. And I chased this behavior for a while. Traced into several modules, and eventually wrote a simpler test case that would give the same result. And yes, it had to do with symbols. So I appended the "is dirty" option to the Client class and like magic things started working.
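For reference, "is dirty" is just a class option in MooseX::Declare, along these lines; the class name and the type import shown are my own illustration, not the literal source.

    use MooseX::Declare;

    # "is dirty" tells MooseX::Declare to skip the namespace::autoclean scrub,
    # so imported symbols -- like the POEx::Types exports that the method
    # signature strings refer to -- survive past initial compilation.
    class My::ProxySession::Client is dirty {
        use POEx::Types ':all';

        # ... methods whose signatures mention types such as SessionID ...
    }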
But in my symbol exploration, I discovered that during POEx::Role::SessionInstantiation's anonymous class cloning, /all/ of the symbols not native to the class were lost, which is a big problem. So that got fixed.
Beyond those problems in making several different technologies work together, the message data structure went through a couple of revisions, to the point where a message identifier is now required and result/delivery semantics work correctly. In addition, the message data structure itself is lightly checked for correctness via the subtype where clauses, which means validation happens everywhere. It's pretty rockin. And a new Role is provided to decorate methods to indicate which methods should be proxied on the subscriber side (POEx::Role::ProxyEvent).
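The validation piece is just a type with a where clause; a minimal illustration follows. The package name and the constraint itself are made up for the example, not the real POEx::ProxySession message spec.

    package My::ProxySession::Types;   # illustrative package name
    use MooseX::Types -declare => [ 'ProxyMessage' ];
    use MooseX::Types::Moose qw( HashRef );

    # Because the where clause lives on the type itself, every attribute and
    # method signature declared against ProxyMessage re-checks the message
    # shape -- which is how validation ends up happening "everywhere".
    subtype ProxyMessage,
        as HashRef,
        where {
               exists $_->{id}      # a message identifier is now required
            && exists $_->{type}
        };

    1;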
And the last thing that was missing was a shutdown event for POEx::Role::TCPServer and TCPClient. In the tests for those modules, I was manually clearing out the stored sockets and aliases, etc., but that wouldn't really work for the tests for PXPS. So a quick implementation and update to tests, and like magic it all works.
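The handler itself is nothing fancy; roughly this kind of cleanup, shown here as a hypothetical sketch with made-up accessor names rather than the actual POEx::Role::TCPServer code.

    # Hypothetical sketch -- accessor names are illustrative, not the real role.
    # The shutdown event just centralizes what the tests used to do by hand:
    # drop the stored socket wheels so POE can reap them, then release the
    # alias so the session is free to stop.
    sub shutdown {
        my ($self) = @_;

        %{ $self->wheels } = ();    # forget all stored socket wheels
        $self->clear_alias;         # remove the session alias
    }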
Unfortunately, it looks like I will be going to YAPC::NA with a vast majority of the dependencies for Voltron living only in git repos rather than on the CPAN. Which isn't so bad, but it severely limits participation from the audience when they have to basically duplicate my environment just to play with it. And it isn't an easy environment to setup considering you have to clone a number of repos, AND have Dist::Zilla installed.
For the conference, I am considering at least bundling everything up in an installation ready state and hosting the tarball. Wouldn't be a lot of work, but it adds on to everything else I still need to do. Two weeks until the conference and Voltron needs a lot of work. And, my POE::Filters talk needs a simple implementation still. I'm freaking out a little bit. Long nights/days ahead of me the next couple of weeks.
Monday, June 1, 2009
Closer to Magic
This past week was a light week development-wise. Mostly, I am blocking, waiting for my dependencies for Voltron to catch up and merge in my branches. One of the blockers was method trait arguments in MooseX::Method::Signatures. I am not really a firm believer in passing arguments to these things since it is compile time (in terms of MXMS/MXD), so you are stuck with static data. But it was deemed important enough to require implementation. So I worked with Cory Watson to figure out where his implementation was falling short.
The first attempt at traits is what I initially merged into my local repo and made work. It was kind of a naive approach where the stuff directly returned from the Devel::Declare context was just rolled up and applied, and the arguments were discarded. It works. The problem with the arguments is that you get them as a string back from the context. And how do you turn a string into a data structure suitable for constructor consumption?
You could eval, or you could parse. I chose parse. So I spent the weekend and part of the week implementing a PPI-based module for parsing Moose-style constructor arguments (e.g. Foo->new(attribute1 => ['foo'], ...)). And I am happy to report that Parse::Constructor::Arguments parses arbitrarily nested static declarations for arguments and returns a hash ref suitable for %{ $hashref } in a constructor.
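Usage looks roughly like this; the exact method name is from memory and may differ in the released module, and Some::Class is just a stand-in.

    use Parse::Constructor::Arguments;

    # Hedged example -- the method name here may not match the released API.
    my $args = Parse::Constructor::Arguments->parse(
        q{attribute1 => ['foo'], attribute2 => { nested => 1 }}
    );

    # $args is a plain hashref, ready to be flattened into a constructor call
    # (Some::Class is a stand-in, not a real module):
    my $object = Some::Class->new( %{ $args } );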
In the meantime, Cory had taken another stab at doing traits+args. And this work really had promise, but he was stuck. After an hour of delving into it, I realized a couple of things. First, he was overwriting the caller-package-initialized $meta, which means that the method, once properly instantiated, was not applied appropriately. And secondly, his arguments weren't being applied at all, since it wasn't clear what to do with what his shadowed method was getting.
The first was easy enough to fix: just add in another lexical. The second was a case of applying my previous PPI work and also Hash::Merge to roll up all of the arguments from all of the traits so they can all be supplied to the new_object method at once.
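The Hash::Merge part of that roll-up is straightforward; a small sketch with illustrative data.

    use strict;
    use warnings;
    use Hash::Merge qw( merge );

    # Each trait contributes its own argument hashref; merge() folds them into
    # one combined hashref so new_object() only has to be called once.
    my @per_trait_args = (
        { foo_args => { verbose => 1 } },
        { bar_args => { retries => 3 } },
    );

    my $merged = {};
    $merged = merge( $merged, $_ ) for @per_trait_args;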
And after a little cleanup and commit amending, all tests were passing and based on the latest Florian master.
Now I just need to hound Florian until he either merges in the branch or implements his own solution for trait arguments.