Have a snippet of Perl you'd like to share with the group?

Click the edit link in the bottom left corner and make sure to indent your code to avoid linewrap and symbol interpretation.

http://grinder.perlmonk.org/pmsi/title.html is an autogenerated index of all snippets posted to the snippets section of PerlMonks.




  use Socket;
  use CGI::Carp qw/fatalsToBrowser/;

  sub fetch {
    my $insecurehost = shift;
    my $insecurepath = shift;
    my $insecureport = 80;
    my $sh;
    local $SIG{ALRM} = sub { die "alarm expired at line ", (caller)[2]; };
    alarm 10;
    socket($sh, PF_INET, SOCK_STREAM, getprotobyname('tcp')) or die "socket failed: $!";
    die "connect failed: $!" unless(connect($sh, sockaddr_in($insecureport, inet_aton($insecurehost))));
    select((select($sh), $|=1)[0]);   # unbuffer the socket
    my $getstr = join "\015\012",     # HTTP wants CRLF line endings
      "GET $insecurepath HTTP/1.0",
      # "Referer: http://$ENV{'HTTP_HOST'}$ENV{'PATH_INFO'}?$ENV{'QUERY_STRING'}",
      "User-Agent: Perl-Snippit/0",
      "Host: $insecurehost";
    print $sh $getstr."\015\012\015\012" or die $!;
    $_ = <$sh>; die "status code $_" unless(($_ =~ m/ (.).. /)[0] == 2);
    my $buf = qq{\n};
    while(!eof $sh) { $_ = <$sh>; $buf .= $_ if(/^\s*$/ .. 1); }   # skip headers, keep body
    close $sh;
    alarm 0;
    return $buf;
  }


This routine fetches a web page from a server using GET. Any CGI arguments must be encoded using CGI::escape() and joined together, with = separating name and value, and & separating sets of those. Cut-and-paste programming like this is considered a bad practice; you should use LWP given a choice. However, there are times when you need to distribute bootstrap code or an all-in-one script that cannot depend on code that isn't in the 5.005 core distribution. Sometimes, ISPs have you locked down from installing binary modules. In this example, you might call fetch('phoenix.pm.org', '/index.html');.
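The encode-and-join step described above might be sketched like this. CGI::escape() does the escaping for you; the hand-rolled urlescape() below is an equivalent kept in, so the sketch has no dependencies. The argument names and values are made up for illustration.

```perl
  # same escaping rule CGI::escape() applies: everything but
  # unreserved characters becomes %XX
  sub urlescape {
    my $s = shift;
    $s =~ s/([^A-Za-z0-9_.~-])/sprintf '%%%02X', ord $1/ge;
    return $s;
  }

  # hypothetical CGI arguments to tack onto the path
  my %args = (name => 'Scott Walters', group => 'phoenix.pm');

  # escape each name and value, join with =, then join the pairs with &
  my $query = join '&',
    map { urlescape($_) . '=' . urlescape($args{$_}) } sort keys %args;

  print $query, "\n";   # group=phoenix.pm&name=Scott%20Walters
  # my $page = fetch('phoenix.pm.org', "/index.html?$query");
```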




  print map { ((@{$peruser->{$_}})[(0..9)]) } sort { $randuser->{$a} <=> $randuser->{$b} } keys %$peruser;

Wanted the last 10 links submitted by each user, with the users themselves in no particular order. %randuser already maps each user to a random value - I could have used a Schwartzian transform here, but I wasn't trying to make this complex; I just didn't want a lot of intermediate values. They bother me. %peruser is keyed by user and contains a list of links for each user. This code, reading right to left, takes all of the keys from %peruser - %$peruser, actually. Then it sorts those keys according to the random number we cooked up before for that user. Then it uses that map statement to extract the first ten entries - the last ten, actually, as we were unshifting them on. The map is kind of tricky. We're taking one thing and returning a list, which Perl flattens into one huge list, which gets printed. That means we need parens around the whole thing in map. Next, inside of that, is a list:

  (@{$peruser->{$_}})

That gives us a list of everything that user has submitted. This sits right next to:

  [(0..9)]

This is the same thing as [(0,1,2,3,4,5,6,7,8,9)]. It's a list slice, which looks up not one but several values at once. It is those elements that are returned back out of map after being wrapped up in a list again. Voilà! The last ten URLs by any one user. Yes, the code is awful. Don't write code like this. I hope it does illustrate that this problem isn't mindbendingly complex, though. --ScottWalters
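As a standalone illustration of the list slice, with made-up data:

```perl
  # fifteen hypothetical URLs, newest first
  my @submitted = map { "http://example.com/link/$_" } 1 .. 15;

  # a list slice pulls several elements out at once
  my @ten = (@submitted)[0 .. 9];

  print scalar(@ten), "\n";   # 10
  print $ten[0], "\n";        # http://example.com/link/1
```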




=head2 Binary Search on Raw Text Files

Binary searches quickly find items, like a hash does. Hashes and binary searches have long competed in databases and similar applications. Each has strengths and weaknesses. To implement a binary search, you repeatedly (iteratively or recursively) narrow the focus of where you're looking for something. You start in the middle. If the thing you're looking for comes before the middle, you look between the beginning and the middle. If it comes after the middle, you look between the middle and the end. When you recurse or iterate, the definition of either "end" or "start" changes to be what was the middle - thus narrowing the scope.
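The same narrowing, applied to a sorted in-memory array rather than a file, might look like this (a minimal sketch; the file-based version that follows works on byte offsets instead of array indices):

```perl
  # classic binary search over a sorted array;
  # returns 1 if $word is present, 0 if not
  sub bsearch {
    my ($word, $sorted) = @_;
    my ($lo, $hi) = (0, $#$sorted);
    while ($lo <= $hi) {
      my $mid = int(($lo + $hi) / 2);
      my $cmp = $word cmp $sorted->[$mid];
      return 1 if $cmp == 0;
      if ($cmp < 0) { $hi = $mid - 1; } else { $lo = $mid + 1; }
    }
    return 0;
  }

  my @words = sort qw(banana apple fig cherry elderberry date);
  print bsearch('cherry', \@words), "\n";   # 1
  print bsearch('grape',  \@words), "\n";   # 0
```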

CGI applications load code and data into memory, do something with it, and quit. The time needed to do a hash lookup or binary search is far less than the time required to read in a large file - so what if we could do binary searches without ever reading the file into memory at all?

This implementation is pointless for small files, but for large and very large files containing sorted data, only a tiny portion of the file is ever accessed.

  my $word = shift @ARGV;

  open my $fh, '<', 'WikiWikiList' or die "open failed: $!";

  sub flup {
    my $start = shift;
    my $stop = shift;
    my $mid = int(($start + $stop) / 2);
    return 0 if $start == $mid or $stop == $mid;
    seek $fh, $mid, 0;
    <$fh>;
    my $line = <$fh>; chomp $line;
    print "debug: $start -> $mid -> $stop  cmp: ", ($word cmp $line), "   $word vs $line\n";
    return flup($start, $mid) if $word lt $line;
    return flup($mid, $stop) if $word gt $line;
    return 1;
  }

  if(flup(0, -s $fh)) {
    print qq{\xb7 Other Wikis: <a href="http://c2.com/cgi/wiki?$word">$word on WikiWiki</a>\n};
  }

This isn't quite log N probes - it's closer to 2*log N, depending on the length of the strings being looked up, because we aren't looking up exact record boundaries, only byte boundaries. The last few tries will return the same word repeatedly, or will bounce between two words before realizing that the sought-after word isn't in there. This only happens in case of failure - it runs at full speed in case of success.

This could easily be adapted to do lookups on only the key portion of key-value pairs to find the value part. Change:

  my $line = <$fh>; chomp $line;

To:

  my $line = <$fh>; chomp $line; ($line, my $value) = split / /, $line, 2;

And change:

  return 1;

To:

  return $value;

This was written to implement http://www.c2.com/cgi/wiki?SisterSites on TinyWiki.




From PerlMonks - sorting arrays of arrays according to multiple criteria. E.g., sorted by column one then sub-sorted by column two:

  my @sorted_data = sort {
      $a->[0] <=> $b->[0]
      or
      $a->[1] cmp $b->[1]
  } @data;
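With some made-up rows, the ordering comes out like this:

```perl
  # made-up rows: [numeric key, string key]
  my @data = ( [2, 'b'], [1, 'z'], [2, 'a'], [1, 'a'] );

  my @sorted_data = sort {
      $a->[0] <=> $b->[0]
      or
      $a->[1] cmp $b->[1]
  } @data;

  print join(' ', map { "$_->[0]$_->[1]" } @sorted_data), "\n";   # 1a 1z 2a 2b
```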


See http://perlmonks.org/index.pl?node_id=246182 - I think DougMiles presented this once, too, and then forgot to put it up here =)


Submitted by J. Get the last array element:

  $lastElementIndex = $#names;   # index of the last element
  $names[$#names];               # the last element itself
  $names[-1];                    # also the last element
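A quick demonstration, with a made-up @names:

```perl
  my @names = ('larry', 'randal', 'damian');   # hypothetical list

  print $#names, "\n";           # 2 - index of the last element
  print $names[$#names], "\n";   # damian
  print $names[-1], "\n";        # damian
```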

Submitted by J. Get a list of the files (*.AAA) in the directory $file_path:

  opendir(DIR, $file_path) or die "opendir failed: $!";
  my @files = readdir(DIR);
  closedir(DIR);
  my $filename;
  foreach (@files) {
    if ($_ =~ m|\.AAA$|) { $filename = $_; last; }
  }


Submitted by Hertz Malaga. Get POST or GET requests:
  # Find whether we have a GET or POST request
  if ($ENV{'REQUEST_METHOD'} eq "GET") { ... }

  ## get POST data from a form (under Apache::ASP)
  my $phone_num = $Request->Form->Item('PHONE_NUM');

  ## GET data arrives URL-encoded in the query string
  my $query_string = $ENV{'QUERY_STRING'};
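$ENV{'QUERY_STRING'} holds the raw, still-encoded GET data; without a module, the pairs can be unpacked by hand along these lines (a sketch, assuming simple name=value pairs, with a pretend query string standing in for the real environment variable):

```perl
  # a pretend query string; a real CGI would read $ENV{'QUERY_STRING'}
  my $query_string = 'my_phone=555-1212&name=Hertz+Malaga';

  my %param;
  for my $pair (split /&/, $query_string) {
    my ($name, $value) = split /=/, $pair, 2;
    $value =~ tr/+/ /;                             # forms encode spaces as +
    $value =~ s/%([0-9A-Fa-f]{2})/chr hex $1/ge;   # undo %XX escapes
    $param{$name} = $value;
  }

  print $param{'my_phone'}, "\n";   # 555-1212
  print $param{'name'}, "\n";       # Hertz Malaga
```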
www.agendawiki.com/cgi-bin/


Submitted by J. A mod_perl script (uses LWP). This is a download-and-replace script: it downloads a web page and lets you replace some words before presenting the page to the browser.
  <%
  use strict;
  require LWP::UserAgent;

  my $site_address = 'http://www.yahoo.com';

  # Get the site's page
  my $responseText = askForResponse($site_address);

  # Do the replacements
  $responseText =~ s#Yahoo#Google#ig;

  # Show the results
  $Response->Write($responseText);

  # sub askForResponse() - returns the response body, or dies on error
  sub askForResponse {
    my $fullLink = shift;
    my $ua = LWP::UserAgent->new;
    $ua->timeout(30);
    $ua->env_proxy;
    my $response = $ua->get($fullLink);
    die $response->status_line unless $response->is_success;
    return $response->content;
  }
  %>





PerlMonks

PerlMonks are cool.



WikiWiki




"The
WikiWikiWeb", the original Wiki software, hosted at http://c2.com/cgi/wiki/ Serves a forum for extreme programming, design patterns, and other software development concepts.



TinyWiki




Describe 'TinyWiki' here. This is the wiki I was looking for!



DougMiles

Email: perlguy@earthlink.net
Job/Position: Phoenix.pm Ex-Head Honcho, Webmaster/Programmer at Bowne of Phoenix
Perl Experience: 7 years
Perl Strengths: Death Rays
Hobbies/Interests: Classified.