The Monastery Gates

New Questions
Uninitialized value from hostname()
2 direct replies — Read more / Contribute
by eepstein
on Oct 11, 2016 at 18:37

    Using Perl v5.20.2 on OpenBSD 5.8.

    I've written a management script, called by a cron job, that pulls values from a database. The ID of each server is represented by its hostname. During periods of network outages or otherwise poor performance, I receive the following warning via the cron job:

    Use of uninitialized value $host in concatenation (.) or string at /etc/mrtg/testing.pl line 128.

    Along with this warning I receive an error from DBI which indicates it couldn't make a connection. That is what leads me to believe it may be related to network errors and performance.

    The cron job itself fires twice a minute with two separate sets of parameters, so it is succeeding >90% of the time. The code itself is fairly simple and writes status and event logs to /var/log/daemon; line 128 is where my management script updates the daemon log. I would post those logs too, but my newsyslog is a bit aggressive (sorry). The log is being updated normally as we speak, with no errors.

    # Perl libraries we need - Please note that Date::Parse is not
    # standard and needs to be installed as of OpenBSD 5.7
    use utf8;
    use Path::Class;
    use strict;
    use warnings;
    use Time::Piece;
    use Date::Parse;
    use Socket;
    use Sys::Hostname;
    use POSIX qw(strftime);
    use DBI;
    use IO::Socket::INET;
    use Storable;
    use Storable qw(nstore store_fd nstore_fd freeze thaw dclone);
    use IPC::Open2;
    use Symbol 'gensym';
    use String::Util 'trim';
    use Net::SNMP;
    use Data::Dumper;

    # Get our hostname for C2 operations
    our $host = hostname();

    # Get our local IP address by connecting to the MRTG Database
    our $localip = '';

    sub get_local_ip() {
        my $sock = IO::Socket::INET->new(
            PeerAddr => $mrtgc2,
            PeerPort => 5432,
            Proto    => "tcp",
        );
        our $localip = '';
        $::localip = $sock->sockhost;
    }
    get_local_ip();

    ......
    ......
    ......

    ## The following line is #128 from the above error ##
    print $fd $daemon_log_timestamp.' '.our $host.' mrtg[]:['.$log_entry_category.']: '.$log_entry.'.'."\n";
    Anybody have an idea as to why the value would be uninitialized? Sys::Hostname uses many methods to get the hostname, so I'm surprised at the error, and stumped about the intermittent nature of it.
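
    (For illustration only, and not the script above: a minimal sketch of one way to guard the logging line so it never interpolates an undefined hostname, falling back to a placeholder if hostname() ever fails.)

    use strict;
    use warnings;
    use Sys::Hostname;

    # Cache the hostname once; fall back to a placeholder so later log
    # lines never interpolate an undefined value.
    my $host = eval { hostname() };
    $host = '(unknown-host)' unless defined $host && length $host;

    printf "%s %s mrtg[]:[%s]: %s.\n",
        scalar(localtime), $host, 'status', 'example log entry';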
perlembed gives out Segmentation fault
No replies — Read more | Post response
by tusooa
on Oct 11, 2016 at 11:16

    Hi there,

    I need to somehow embed a Perl interpreter in a shared library and call it dynamically from another application. My idea is to create the interpreter in the constructor function, use it to call some functions when needed, and finally free it in the destructor function.

    To test the library I wrote a one-liner using Perl's Win32::API to load it. The loading is no problem, but when it comes to FreeLibrary I get the message `Perl interpreter has stopped working'. What confuses me, however, is that according to the messages printed in the terminal, the destructor function seems to be executed correctly. When I use gdb, it shows a SIGSEGV, claiming the problem is in ntdll!RtlWaitOnAddress () from C:\WINDOWS\SYSTEM32\ntdll.dll.

    I also wrote a C program invoking the shared library, which seems to have no problem loading and freeing it.

    I am using a 32-bit Strawberry Perl 5.24.0 on a 64-bit Windows 10.

    Could anyone help with this problem please? Lots of thanks and sorry for my poor English.

    Best regards,

    The perlembed code is as follows:

    #include <EXTERN.h>
    #include <perl.h>
    #include <windows.h>
    #include <stdio.h>

    static PerlInterpreter *my_perl = NULL;

    static __attribute__((constructor)) void init()
    {
        int num = 2;
        char *args[] = { "", "init.perl" };
        PERL_SYS_INIT3((int *)NULL, (char ***)NULL, (char ***)NULL);
        my_perl = perl_alloc();
        perl_construct(my_perl);
        PL_exit_flags |= PERL_EXIT_DESTRUCT_END;
        perl_parse(my_perl, NULL, num, args, (char **)NULL);
        perl_run(my_perl);
    }

    static __attribute__((destructor)) void quit()
    {
        if (my_perl) {
            PL_perl_destruct_level = 1;
            perl_destruct(my_perl); printf("1st\n");
            perl_free(my_perl);     printf("2nd\n");
            PERL_SYS_TERM();        printf("3rd\n");
        }
    }

    // the following are functions to invoke in perl script
    extern __declspec(dllexport) void about()
    {
        dSP;
        SV *err_tmp;
        ENTER;
        SAVETMPS;
        PUSHMARK(SP);
        PUTBACK;
        call_pv("about", G_VOID|G_DISCARD|G_EVAL|G_NOARGS);
        SPAGAIN;
        err_tmp = ERRSV;
        if (SvTRUE(err_tmp)) {
            printf("[error about]%s\n", SvPV_nolen(err_tmp));
            POPs;
        }
        PUTBACK;
        FREETMPS;
        LEAVE;
    }

    The perl script invoked is as follows, named init.perl:

    #!/usr/bin/env perl
    use 5.012;
    say 'aaa';
    sub about {
        say "bbb";
    }

    The output of GDB is:

    (gdb) set args -e "use Win32;my $c = Win32::LoadLibrary('interp.dll');Win32::FreeLibrary($c);"
    (gdb) r
    Starting program: C:\Home\Programs\perl5\perl\bin\perl.exe -e "use Win32;my $c = Win32::LoadLibrary('interp.dll');Win32::FreeLibrary($c);"
    [New Thread 5764.0x10f4]
    warning: FTH: (5764): *** Fault tolerant heap shim applied to current process. This is usually due to previous crashes. ***
    [New Thread 5764.0x1bc4]
    [New Thread 5764.0x1698]
    [New Thread 5764.0xc68]
    aaa
    1st
    2nd
    3rd

    Program received signal SIGSEGV, Segmentation fault.
    0x77616d39 in ntdll!RtlWaitOnAddress () from C:\WINDOWS\SYSTEM32\ntdll.dll
    (gdb) info all-registers
    eax            0x0                 0
    ecx            0x6de7bf34          1843904308
    edx            0xffffffff          -1
    ebx            0xffffff00          -256
    esp            0x60fd20            0x60fd20
    ebp            0x60fd4c            0x60fd4c
    esi            0x6de7bf24          1843904292
    edi            0x0                 0
    eip            0x77616d39          0x77616d39 <ntdll!RtlWaitOnAddress+153>
    eflags         0x10213             [ CF AF IF RF ]
    cs             0x23                35
    ss             0x2b                43
    ds             0x2b                43
    es             0x2b                43
    fs             0x53                83
    gs             0x2b                43
    st0            0                   (raw 0x00000000000000000000)
    st1            0                   (raw 0x00000000000000000000)
    st2            0                   (raw 0x00000000000000000000)
    st3            0                   (raw 0x00000000000000000000)
    st4            12                  (raw 0x4002c000000000000000)
    st5            1000                (raw 0x4008fa00000000000000)
    st6            5.0110000000000001207922650792170316 (raw 0x4001a05a1cac08312800)
    st7            1                   (raw 0x3fff8000000000000000)
    fctrl          0x37f               895
    fstat          0x420               1056
    ftag           0xffff              65535
    fiseg          0x0                 0
    fioff          0x6dcc9c40          1842125888
    foseg          0x0                 0
    fooff          0x60f188            6353288
    fop            0x0                 0
    xmm0           {v4_float = {0x0, 0x0, 0x0, 0x0}, v2_double = {0x0, 0x0}, v16_int8 = {0x0 <repeats 16 times>}, v8_int16 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, v4_int32 = {0x0, 0x0, 0x0, 0x0}, v2_int64 = {0x0, 0x0}, uint128 = 0x00000000000000000000000000000000}
    xmm1           {v4_float = {0x0, 0x0, 0x0, 0x0}, v2_double = {0x0, 0x0}, v16_int8 = {0x0 <repeats 16 times>}, v8_int16 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, v4_int32 = {0x0, 0x0, 0x0, 0x0}, v2_int64 = {0x0, 0x0}, uint128 = 0x00000000000000000000000000000000}
    xmm2           {v4_float = {0x0, 0x0, 0x0, 0x0}, v2_double = {0x0, 0x0}, v16_int8 = {0x0 <repeats 16 times>}, v8_int16 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, v4_int32 = {0x0, 0x0, 0x0, 0x0}, v2_int64 = {0x0, 0x0}, uint128 = 0x00000000000000000000000000000000}
    xmm3           {v4_float = {0x0, 0x0, 0x0, 0x0}, v2_double = {0x0, 0x0}, v16_int8 = {0x0 <repeats 16 times>}, v8_int16 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, v4_int32 = {0x0, 0x0, 0x0, 0x0}, v2_int64 = {0x0, 0x0}, uint128 = 0x00000000000000000000000000000000}
    xmm4           {v4_float = {0x0, 0x0, 0x0, 0x0}, v2_double = {0x0, 0x0}, v16_int8 = {0x0 <repeats 16 times>}, v8_int16 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, v4_int32 = {0x0, 0x0, 0x0, 0x0}, v2_int64 = {0x0, 0x0}, uint128 = 0x00000000000000000000000000000000}
    xmm5           {v4_float = {0x0, 0x0, 0x0, 0x0}, v2_double = {0x0, 0x0}, v16_int8 = {0x0 <repeats 16 times>}, v8_int16 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}, v4_int32 = {0x0, 0x0, 0x0, 0x0}, v2_int64 = {0x0, 0x0}, uint128 = 0x00000000000000000000000000000000}
    xmm6           {v4_float = {0xbb, 0x0, 0x371, 0x0}, v2_double = {0x8000000000000000, 0x8000000000000000}, v16_int8 = {0x69, 0x6e, 0x3b, 0x43, 0x3a, 0x5c, 0x48, 0x6f, 0x6d, 0x65, 0x5c, 0x44, 0x6f, 0x63, 0x75, 0x6d}, v8_int16 = {0x6e69, 0x433b, 0x5c3a, 0x6f48, 0x656d, 0x445c, 0x636f, 0x6d75}, v4_int32 = {0x433b6e69, 0x6f485c3a, 0x445c656d, 0x6d75636f}, v2_int64 = {0x6f485c3a433b6e69, 0x6d75636f445c656d}, uint128 = 0x6d75636f445c656d6f485c3a433b6e69}
    xmm7           {v4_float = {0x0, 0x0, 0x0, 0x0}, v2_double = {0x8000000000000000, 0x8000000000000000}, v16_int8 = {0x65, 0x6e, 0x74, 0x73, 0x5c, 0x47, 0x69, 0x74, 0x48, 0x75, 0x62, 0x5c, 0x73, 0x63, 0x72, 0x69}, v8_int16 = {0x6e65, 0x7374, 0x475c, 0x7469, 0x7548, 0x5c62, 0x6373, 0x6972}, v4_int32 = {0x73746e65, 0x7469475c, 0x5c627548, 0x69726373}, v2_int64 = {0x7469475c73746e65, 0x697263735c627548}, uint128 = 0x697263735c6275487469475c73746e65}
    mxcsr          0x1f80              [ IM DM ZM OM UM PM ]
    mm0            {uint64 = 0x0, v2_int32 = {0x0, 0x0}, v4_int16 = {0x0, 0x0, 0x0, 0x0}, v8_int8 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}}
    mm1            {uint64 = 0x0, v2_int32 = {0x0, 0x0}, v4_int16 = {0x0, 0x0, 0x0, 0x0}, v8_int8 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}}
    mm2            {uint64 = 0x0, v2_int32 = {0x0, 0x0}, v4_int16 = {0x0, 0x0, 0x0, 0x0}, v8_int8 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}}
    mm3            {uint64 = 0x0, v2_int32 = {0x0, 0x0}, v4_int16 = {0x0, 0x0, 0x0, 0x0}, v8_int8 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0}}
    mm4            {uint64 = 0xc000000000000000, v2_int32 = {0x0, 0xc0000000}, v4_int16 = {0x0, 0x0, 0x0, 0xc000}, v8_int8 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc0}}
    mm5            {uint64 = 0xfa00000000000000, v2_int32 = {0x0, 0xfa000000}, v4_int16 = {0x0, 0x0, 0x0, 0xfa00}, v8_int8 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xfa}}
    mm6            {uint64 = 0xa05a1cac08312800, v2_int32 = {0x8312800, 0xa05a1cac}, v4_int16 = {0x2800, 0x831, 0x1cac, 0xa05a}, v8_int8 = {0x0, 0x28, 0x31, 0x8, 0xac, 0x1c, 0x5a, 0xa0}}
    mm7            {uint64 = 0x8000000000000000, v2_int32 = {0x0, 0x80000000}, v4_int16 = {0x0, 0x0, 0x0, 0x8000}, v8_int8 = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x80}}
    (gdb)
How bad is $SIG{__DIE__} really?
4 direct replies — Read more / Contribute
by haukex
on Oct 11, 2016 at 05:59

    Fellow Monks,

    The thread Getting the Behavior of the "file open or die" Pragma but with a Program Pause got me thinking about this. The documentation of $SIG{__DIE__} says this:

    Having to even think about the $^S variable in your exception handlers is simply wrong. $SIG{__DIE__} as currently implemented invites grievous and difficult to track down errors. Avoid it and use an END{} or CORE::GLOBAL::die override instead.

    That exact paragraph has been present in perlvar since 1999, and although it sounds like a pretty dire warning, currently it's at the bottom of that section of documentation. I've seen $SIG{__DIE__} used in lots of places, and I've used it myself a few times, mostly to rewrite the die message and add additional information, and sometimes to send error messages to syslog as well. Although the code seems to work fine, now I'm wondering about that. Is $SIG{__DIE__} really as bad as the documentation says and should I be avoiding it for these uses? As always, your wisdom would be appreciated.

    Update: Some examples of how I've used $SIG{__DIE__}:

    local $SIG{__DIE__} = sub { die "[".gmtime." UTC] (PID $$) FATAL ".shift };
    # - OR - #
    for my $i (0..10) {
        local $SIG{__DIE__} = sub { my $e = shift; $e=~s/\.?\n?$//; die "$e (item $i)\n" };
    }
    # - OR - #
    local $SIG{__DIE__} = sub { chomp(my $m = shift); syslog('err','Error: %s',$m) };

    What say you, have I sinned?

    P.S. I should say that I am aware of the issues of how $SIG{__DIE__} interacts with eval and that specifically hasn't been a problem for me so far. I'm asking mostly about the other, more general, warning in the documentation I quoted above.
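
    (For readers who haven't run into that interaction, here is a minimal illustrative sketch, not production code: a global handler also fires for exceptions that an eval is about to catch, unless it checks $^S itself.)

    use strict;
    use warnings;

    local $SIG{__DIE__} = sub {
        my ($msg) = @_;
        return if $^S;       # true while inside an eval: leave those alone
        warn "FATAL: $msg";  # decorate/log only genuinely fatal errors
    };

    my $ok = eval { die "recoverable\n"; 1 };  # handler returns early here
    die "really fatal\n";                      # handler logs this one, then we die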

    Thanks,
    -- Hauke D

[SOLVED]: DBD::SQLite "file is encrypted or is not a database" in running application
4 direct replies — Read more / Contribute
by stevieb
on Oct 09, 2016 at 16:07

    Update: davido pointed me to the documentation, which states that separate processes sharing a single DB handle will almost certainly cause issues. I changed my code so that each event (process) uses its own DB handle, with all of them pointing at the same SQLite database. Since then I've had 19+ hours of successful runtime, so this appears to be resolved.

    I've got a long-running web app with Dancer2 that has asynchronous processes doing DB work outside of the main process.

    I've been trying to sort out why the event crashes. After dumping the web access requests to /dev/null and adding some debug printing to the C code that reads a sensor and updates the DB, I finally got what I was looking for. In the error below, the code reads the sensor until it gets valid temperature and humidity values, the C code passes those values back to Perl, and Perl (still inside the async event) writes them to the DB.

    However, everything I can find on the error "file is encrypted..." relates to version mismatches and the like. I can't see that being the case here, because my app had been running for 2.5+ hours and then broke suddenly. The web app is still running, but this event has crashed, so there are no more updates.

    Could this be a clash of file access or something? Anyone seen this before?

    DHT11 exec temp
    temp data: -1
    temp data: 21
    temp: 21
    DHT11 exec humidity
    humidity data: -1
    humidity data: -1
    humidity data: -1
    humidity data: -1
    humidity data: -1
    humidity data: 20
    humidity: 20
    DBD::SQLite::st execute failed: file is encrypted or is not a database at /home/pi/repos/app-envui/bin/../lib/App/RPi/EnvUI/DB.pm line 66.
    DBD::SQLite::st execute failed: database disk image is malformed at /home/pi/repos/app-envui/bin/../lib/App/RPi/EnvUI/DB.pm line 95.
    Error while loading /home/pi/repos/app-envui/bin/app.pl: DBD::SQLite::st execute failed: database disk image is malformed at /home/pi/repos/app-envui/bin/../lib/App/RPi/EnvUI/DB.pm line 95.
    Compilation failed in require at /home/pi/repos/app-envui/bin/app.pl line 8.
    BEGIN failed--compilation aborted at /home/pi/repos/app-envui/bin/app.pl line 8.
    [App::RPi::EnvUI:7104] error @2016-10-09 13:39:58> Route exception: DBD::SQLite::st execute failed: file is encrypted or is not a database at /home/pi/repos/app-envui/bin/../lib/App/RPi/EnvUI/DB.pm line 66. in /home/pi/perl5/perlbrew/perls/perl-5.22.2/lib/site_perl/5.22.2/Dancer2/Core/App.pm l. 1444
    (in cleanup) Can't kill a non-numeric process ID at /home/pi/perl5/perlbrew/perls/perl-5.22.2/lib/site_perl/5.22.2/Async/Event/Interval.pm line 29 during global destruction.

    The last line of the error I understand; that's due to the separate proc crashing without waiting for its child proc, and I just haven't tidied that up yet.

    The crash is occurring on an open DB handle during a basic select statement (at least in this case it was a select).
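
    (For reference, a minimal sketch of the one-DB-handle-per-process pattern mentioned in the update above; the file name and table are made up for illustration and are not the actual App::RPi::EnvUI code.)

    use strict;
    use warnings;
    use DBI;

    # Every process (parent or forked worker) opens its *own* connection
    # to the shared SQLite file instead of reusing an inherited handle.
    sub fresh_dbh {
        return DBI->connect(
            'dbi:SQLite:dbname=envui.db',   # made-up filename
            '', '',
            { RaiseError => 1, AutoCommit => 1 },
        );
    }

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {                        # child: its own handle
        my $dbh = fresh_dbh();
        $dbh->do('CREATE TABLE IF NOT EXISTS readings (temp INTEGER, humidity INTEGER)');
        $dbh->do('INSERT INTO readings (temp, humidity) VALUES (?, ?)', undef, 21, 20);
        $dbh->disconnect;
        exit 0;
    }

    waitpid $pid, 0;
    my $dbh = fresh_dbh();                  # parent: separate handle, same DB file
    my ($rows) = $dbh->selectrow_array('SELECT COUNT(*) FROM readings');
    print "rows: $rows\n";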

Perl Deep Recursion locks up modern linuxes?
2 direct replies — Read more / Contribute
by vsespb
on Oct 09, 2016 at 12:47

    Recently I found that if I run:

    perl -e 'sub x { x() }; x()'
    on a modern Linux box, the whole system locks up: the keyboard and mouse stop responding.

    I asked several people to reproduce the problem. Also tested on virtual machine.

    So: Ubuntu 12.04 is not affected at all; I can terminate the script with Ctrl-C.

    Ubuntu 16.04 locks up. The only way out is to power off the machine (or wait until it fills all memory; or maybe REISUB, which I didn't try). I reported this as a bug to the Ubuntu team, and they confirmed that 16.04 does indeed lock up.

    Debian (7 or 8, I don't remember which) is less responsive than Ubuntu 12.04, but I can still quit.

    One person reported that Fedora (a modern release, I suspect) locks up too.

    Mac OS X is not affected.

    The Ubuntu team classified the report as a Perl problem (which does not necessarily mean it will be accepted as a valid bug). I was going to object, but interestingly, not every programming language can cause this kind of trouble: I tried Ruby and C, and because they use the C stack they simply run out of stack very quickly and nothing bad happens.

    Question: do you have any thoughts on why modern OSes are so unfriendly to someone running buggy Perl code (perhaps a developer debugging it)? Does perl use a memory-allocation pattern that is not "compatible" with modern kernels? Did the Linux kernel change some defaults related to process priorities? How can Linux be tuned to avoid this? (I tried nice and ionice; they don't help.)

    I understand that one workaround is to install a $SIG{__WARN__} handler and terminate on "Deep recursion" warnings.
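
    (For completeness, a minimal sketch of that workaround; illustrative only.)

    use strict;
    use warnings;

    # Turn the "Deep recursion" warning into a fatal error before the
    # process eats all available memory.
    $SIG{__WARN__} = sub {
        my ($msg) = @_;
        die $msg if $msg =~ /^Deep recursion on subroutine/;
        warn $msg;
    };

    sub x { x() }
    x();   # now dies once the warning fires instead of recursing until OOM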
Moose Trait to Add a Method to an Attribute
1 direct reply — Read more / Contribute
by choroba
on Oct 07, 2016 at 16:36
    Dear brethren and sistren!

    Imagine a Moose class with an array attribute.

    package My;
    use Moose;
    has list => (
        is  => 'rw',
        isa => 'ArrayRef[Str]',
    );

    If we need to change all the elements of the attribute to uppercase, the simplest solution would be to use map directly:

    my $obj = 'My'->new(list => [qw[ a b c ]]);
    $obj->list([ map uc, @{ $obj->list } ]);

    Now, suppose our attribute is more complex. We'll implement it with the Array trait and its map method:

    package My;
    use Moose;
    has list => (
        is      => 'rw',
        isa     => 'ArrayRef[Str]',
        traits  => [qw[ Array ]],
        handles => {
            add_to_list   => 'push',
            list_elements => 'elements',
            map_list      => 'map',
        },
    );

    Which changes the code to

    $obj->list([ $obj->map_list(sub { uc }) ]);

    We can still imagine a simpler interface: the apply_to_list method that takes a code reference and applies it to each element of the list.

    I was able to implement it as another trait:

    #!/usr/bin/perl
    use warnings;
    use strict;
    use feature qw{ say };

    {
        package Array::Apply;
        use Moose::Role;

        before install_accessors => sub {
            my $class  = shift;
            my $name   = $class->name;
            my $method = "apply_to_$name";
            $class->associated_class->add_method(
                $method => sub {
                    my ($self, $code) = @_;
                    $_ = $code->($_) for @{ $self->$name };
                }
            );
        };
    }

    {
        package My;
        use Moose;
        has list => (
            is      => 'rw',
            isa     => 'ArrayRef[Str]',
            default => sub { [] },
            traits  => [qw[ Array Array::Apply ]],
            handles => {
                add_to_list   => 'push',
                list_elements => 'elements',
            },
        );
        __PACKAGE__->meta->make_immutable;
    }

    my $obj = My->new;
    $obj->add_to_list(qw( a b c ));
    $obj->apply_to_list(sub { uc });
    say for $obj->list_elements;

    But that's not exactly what we imagined. Is it possible to create a trait (or modify the Array trait) in such a way that it adds a new apply method to the attribute, which we could then use in delegation, i.e. something like

    handles => {
        add_to_list   => 'push',
        list_elements => 'elements',
        apply_to_list => 'apply',
    },

    I've been trying for several hours, but I just don't know how to do it. Do you?

    ($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord }map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
detached threads still warn
2 direct replies — Read more / Contribute
by pryrt
on Oct 07, 2016 at 11:36

    So, I started playing around with threads, using Re: buffering zipped pipes as an example. I got something working on my Windows machine, then tried it out on the ancient linux box (perl 5.8.5)... and got the dreaded "A thread exited while 2 threads were running."

    I double-checked to make sure that the linux box's perldoc threads claimed support for ->detach, and the WARNINGS section suggested joining to remove the error. I thought ->detach was supposed to make it automatically clean up such stuff.

    I tried no warnings 'threads'; or even no warnings;, just to see if I could make the warning go away, but it stuck around.

    Given the details (below), how can I get around this warning? Can I install a new local version of threads on the linux machine (I have cpan set up for local library installs), or is threads too core to be able to update with cpan? Or should I just switch to not using ->detach, and do manual cleanup using ->join?
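
    (Here's a minimal, self-contained sketch of the join-based cleanup I mean; illustrative only, and not my actual zip-pipe code.)

    use strict;
    use warnings;
    use threads;

    # Keep the thread objects and join them before the program exits, so
    # no thread is still running at interpreter shutdown.
    my @workers = map {
        threads->create(
            sub {
                my ($id) = @_;
                sleep 1;
                return "worker $id done";
            },
            $_
        );
    } 1 .. 2;

    print "$_\n" for map { $_->join } @workers;   # reap instead of detach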

    tl;dr details follow...

SVN::Client -- deleting created working copy results in 'cannot remove directory'
1 direct reply — Read more / Contribute
by tj_thompson
on Oct 06, 2016 at 14:21
    Hello monks,

    I'm writing some code around SVN::Client. During testing, I'd like to be able to create and delete working copies. However, I find that once I've created a working copy, I am unable to cleanly delete it without ending the process that created it.

    Here's some code that produces the warning:

    use strict;
    use warnings;
    use SVN::Client;
    use File::Path qw(rmtree);

    my $repo = 'file:///nfs/pdx/disks/mdo_cttr_prod/dev/tmp/repo_test/repo1';
    my $path = '/nfs/pdx/disks/mdo_cttr_prod/dev/tmp/repo_test/wc1';

    my $client = new SVN::Client(
        auth => [ SVN::Client::get_username_provider() ],
    );

    # checkout a working copy
    my $rev = $client->checkout( $repo, $path, 'HEAD', 0 );

    # eliminate the client
    $client = undef;

    # attempt to remove wc
    # this produces the warning for being unable to remove directory
    rmtree $path;

    This is on Linux and this is the generated warnings:

    cannot remove directory for /nfs/pdx/disks/mdo_cttr_prod/dev/tmp/repo_test/wc1/.svn: Directory not empty at tmp.pl line 21.
    cannot remove directory for /nfs/pdx/disks/mdo_cttr_prod/dev/tmp/repo_test/wc1: Directory not empty at tmp.pl line 21.

    The problem seems to be due to this .nfs file:

    plxcf4060> ls -alrt wc1/.svn
    total 132
    -rw-r----- 1 rptrdev rptrusrs 122880 Oct  6 10:54 .nfs000000000711c5da000000c9
    drwxr-s--- 2 rptrdev rptrusrs   4096 Oct  6 10:54 .
    drwxr-s--- 3 rptrdev rptrusrs   4096 Oct  6 10:54 ..

    Despite having undefined the client, this lock file seems to be left behind and I'm uncertain as to how to release it.

    I have a comment from some old code I wrote that seems to summarize the issue. Unfortunately I have been unable to dig up my source for this comment to research further (and I can't even be certain of the accuracy of the information it contains):

    # Currently, the below code is unable to release the sqlite
    # database file used to store data because SVN::Client module has
    # no perl code for releasing the lock it aquires on the .svn
    # working copy db.wc file. In order to accomplish this, it seems
    # that svn_wc_context_destroy must be called. This seems to be
    # built into the XS code written for the perl bindings, but there
    # does not seem to be a perl binding written to utilize it. Until
    # this is resolved, the only way to release the lock on the
    # database file is to terminate the process that acquired it.

    Any ideas on how to release this working copy properly so it can be deleted between tests?
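
    (Along the lines of the comment above, here is a minimal, untested sketch of the only workaround it suggests: do the checkout in a child process, so the lock dies with that process before the parent removes the directory. Paths are the ones from the example.)

    use strict;
    use warnings;
    use File::Path qw(rmtree);

    my $repo = 'file:///nfs/pdx/disks/mdo_cttr_prod/dev/tmp/repo_test/repo1';
    my $path = '/nfs/pdx/disks/mdo_cttr_prod/dev/tmp/repo_test/wc1';

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        require SVN::Client;
        my $client = SVN::Client->new(
            auth => [ SVN::Client::get_username_provider() ],
        );
        $client->checkout( $repo, $path, 'HEAD', 0 );
        exit 0;              # lock is released when this process ends
    }

    waitpid $pid, 0;
    rmtree $path;            # should now succeed, since the locking process is gone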

Please critique this script -- Read emails and write URLs into bookmarks
3 direct replies — Read more / Contribute
by Anonymous Monk
on Oct 06, 2016 at 14:03

    This is my first Perl script. Perl is not my first programming language, though.

    This script is cobbled together from various sources. I used Perl::Critic and Perl::Tidy to help me write idiomatic code. You guys did a great job with those tools.

    #!/usr/bin/perl
    # read emails I sent myself containing just a link
    # and make a HTML-redirect based bookmark out of them
    #
    # SPEC
    # read given files as internet messages
    # use subject as title and filename, escape as needed
    # use one URL in body as href (if there a two they are both the same)
    # and output one HTML file for each bookmark

    use strict;
    use warnings;

    use Encode;
    use Path::Tiny;
    use Email::Simple;
    use HTML::Entities;
    use URI::Find;

    use version; our $VERSION = qv('0.0.1');

    foreach my $in_filepath (@ARGV) {

        my $in_file = Path::Tiny->new($in_filepath)->slurp;
        my $message = Email::Simple->new($in_file);

        my $subject = decode( 'MIME-Header', $message->header('Subject') );
        my $title   = HTML::Entities::encode($subject);

        my $out_filepart = $subject =~ s/[%\/\?<>\\:\*\|":]/_/gr;

        my @uris;
        # TODO URL::Search might be nicer than URI::Find for some cases
        # TODO $finder = URI::Find->new(sub { push @uris, $_[0] });
        my $finder = URI::Find->new(
            sub {
                my $uri = shift;
                push @uris, $uri;
            },
        );
        $finder->find( \$message->body );

        path("$out_filepart.URL.HTML")
            ->spew( "<!DOCTYPE html><title>$title</title><meta http-equiv=\"refresh\" content=\"0; url=$uris[0]\">" );
    }

    The script does what it is supposed to do. I'm looking for any kind of feedback, especially pitfalls, 'better' idioms, etc. I've already received some feedback, which I've added as 'TODO' comments.

    Some musings about the choices I made:

    • I explicitly used fully qualified names to make it very clear where things come from. I didn't do that for core modules.
    • I would have liked to concatenate the `$out_filepath` with the file extension directly at initialization, but I didn't know how to do that with the regex (see the sketch after this list).
    • I did not use qq() primarily because it's not a very memorable name and does not appear in any other language I use. That's also the reason I didn't use any other shortcuts, particularly punctuation variables.
    • I'm not at all sure whether there's still some encoding nonsense possible (filenames, file contents, etc.).
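
    (A sketch for the second bullet, with a made-up subject string: the s///gr substitution returns the modified copy, so the sanitized name and the extension can be joined in the same statement that initializes the variable.)

    use strict;
    use warnings;

    my $subject = 'Some: subject / with * bad chars?';

    # s///gr returns the substituted copy, so concatenation can happen
    # right at initialization.
    my $out_filepath = ( $subject =~ s/[%\/\?<>\\:\*\|":]/_/gr ) . '.URL.HTML';

    print "$out_filepath\n";   # Some_ subject _ with _ bad chars_.URL.HTML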

    Overall I'm okay with how it turned out for a first attempt. It's not that big a script, after all. So: critique away :)

Moose subtypes
1 direct reply — Read more / Contribute
by kdjantzen
on Oct 06, 2016 at 13:37
    Hello,

    in a Moose::Role I defined two subtypes

    subtype 'OptInt' => as 'Int';
    subtype 'OptStr' => as 'Str';
    With
    use Types::Standard qw(Maybe Str Int OptInt OptStr);
    I want to specify an attribute of a class
    has 'xxx' => (
        is  => 'rw',
        isa => (Maybe[OptInt])->plus_coercions(OptInt, sub { undef }),
    );
    which, understandably, results in the error message
    Could not find sub 'OptInt' to export in package 'Types::Standard' at
    How should I specify my subtypes in order to make the above work?
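
    (One direction that might work, sketched from Type::Tiny's library pattern and untested here: declare the subtypes in a small Type::Library package instead of the Moose::Role, so they become importable names. This sketch does not address the plus_coercions part above; the package names are made up.)

    package My::Types;
    use Type::Library -base, -declare => qw( OptInt OptStr );
    use Type::Utils qw( declare as );
    use Types::Standard qw( Int Str );

    declare OptInt, as Int;
    declare OptStr, as Str;

    package My::Class;
    use Moose;
    use Types::Standard qw( Maybe );
    BEGIN { My::Types->import(qw( OptInt )) }   # same-file stand-in for: use My::Types qw( OptInt );

    has xxx => (
        is  => 'rw',
        isa => Maybe[OptInt],
    );

    1;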

    Thanks for any help.

    K.D.J.

Session id generation with Perl once more
3 direct replies — Read more / Contribute
by Dallaylaen
on Oct 06, 2016 at 11:38

    So here we are again. After examining several posts about session generation, including Dancer's, Catalyst's and PHP's, I decided to roll my own.

    As I understand it, there are generally three requirements:

    • unique;
    • hard to guess;
    • reasonably fast.

    I believe the following code snippet satisfies these criteria:

    =head2 get_session_id( [$user_salt] )

    Generate a new, shiny, unique, unpredictable session id.
    Id is base64-encoded.

    The default is using two rounds of md5 with time, process id, hostname,
    and random salt. Should be unique and reasonably hard to guess.

    If argument is given, it's also added to the mix.

    Set $MVC::Neaf::X::Session::Hash to other function (e.g. Digest::SHA::sha224)
    if md5 is not secure enough.

    Set $MVC::Neaf::X::Session::Host to something unique if you know better.
    Default is hostname.

    Set $MVC::Neaf::X::Session::Truncate to the desired length
    (e.g. if length constraint in database).
    Default (0) means return however many chars are generated by hash+base64.

    =cut

    use Digest::MD5;
    use Time::HiRes qw(gettimeofday);
    use Sys::Hostname qw(hostname);
    use MIME::Base64 qw(encode_base64);

    # Premature optimisation at its best.
    # Should be more or less secure and unique though.
    my $max      = 2*1024*1024*1024;
    my $count    = 0;
    my $old_rand = 0;
    my $old_mix  = '';
    our $Host     = hostname() || '';
    our $Hash     = \&Digest::MD5::md5_base64;
    our $Truncate;

    sub get_session_id {
        my ($self, $salt) = @_;

        $count = $max unless $count--;
        my $rand = int ( rand() * $max );
        my ($time, $ms) = gettimeofday();
        $salt = '' unless defined $salt;

        # using old entropy means attacker will have to guess ALL previous sessions
        $old_mix = $Hash->(pack "LaaaaLLLLaL"
            , $rand, $old_mix, "#"
            , $Host, '#', $$, $time, $ms, $count
            , $salt, $old_rand);

        # salt before second round of hashing
        # public data (session_id) should NOT be used for generation
        $old_rand = int (rand() * $max );

        my $ret = encode_base64( $Hash->( pack "aL", $old_mix, $old_rand ) );
        $ret = substr( $ret, 0, $Truncate )
            if $Truncate and $Truncate < length $ret;
        return $ret;
    };

    # finally, bootstrap the session generator at startap
    get_session_id();
    • Hostname, pid, time (with milliseconds), and counter make it unique.
    • Two random numbers, extra round of hashing, and dependence on previous generations make it hard to guess.
    • As for speed, it was able to churn out around 48k sessions/second on a 1GHz laptop.

    But I may well be missing something. Am I?
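
    (A loop along these lines is one way to check the sessions/second figure; it assumes the snippet above has been pasted into the same file so get_session_id() is available, and the count is arbitrary.)

    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my $n     = 100_000;
    my $start = time;
    get_session_id(undef) for 1 .. $n;   # undef stands in for $self
    printf "%.0f sessions/second\n", $n / (time - $start);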

Optimizing TCP packet usage with IO::Socket::INET or similar
2 direct replies — Read more / Contribute
by Bloehdian
on Oct 06, 2016 at 08:55

    Hello Monks,

    this is a design/approach question, therefore no code so far:

    I want to set up a client-server scenario with IO::Socket::INET (or a similar module) to transfer (textual) data to a remote TCP port using the send() method.

    The data to be sent is received continuously line by line from a different TCP-port and it should be sent continuously to the target socket as well.

    To make the transfer as efficient as possible in terms of overhead/payload ratio, it is necessary to stuff as much data into each TCP packet as possible, with as little padding as possible.

    I.e., if the MTU is 1500 and the TCP overhead is assumed to be 64 bytes, the payload should ideally be 1436 bytes per packet.

    Will my original idea

    $socket->send( $data );

    with $data comprising exactly 1436 bytes work, or am I wrong in assuming that each send() will put $data into no more than one TCP packet?

    And: as stated, the data to be transmitted comes in line by line as a continuous stream. Each line is expected to have significantly fewer bytes than the MTU, but this is not guaranteed. Accumulating enough lines to fill a TCP packet, and chopping the last line if it would exceed the maximum payload, is not that straightforward. A rough sketch of what I mean follows.
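
    (Illustrative only; the host, port, and target payload size are made up, and it deliberately ignores whether the kernel really emits one packet per send().)

    use strict;
    use warnings;
    use IO::Socket::INET;

    my $target = 1436;                     # assumed usable payload per packet
    my $sock   = IO::Socket::INET->new(
        PeerAddr => 'collector.example.com',
        PeerPort => 9000,
        Proto    => 'tcp',
    ) or die "connect failed: $!";

    my $buffer = '';
    while (my $line = <STDIN>) {           # stand-in for the incoming TCP stream
        # flush before the buffer would exceed the target payload size
        if (length $buffer && length($buffer) + length($line) > $target) {
            $sock->send($buffer);
            $buffer = '';
        }
        $buffer .= $line;
    }
    $sock->send($buffer) if length $buffer;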

    Therefore:

    Is there a way to do this automatically/implicitly (using a Perl module)?

    Related:

    I am also thinking about further optimizing the transfer for low bandwidth usage by deploying Net::EasyTCP with compression. Am I right in assuming that the receiving application on the remote side has to use the same compression module, e.g. Compress::Zlib, to uncompress the data, or is that handled somewhere by the TCP client?

    Cheers

    Bloehdian

New Monk Discussion
Data lost from private scratchpad
4 direct replies — Read more / Contribute
by Athanasius
on Oct 07, 2016 at 22:38

    For most of my time at the Monastery (nearly 4½ years now) I’ve been slowly accumulating information in my private scratchpad. Today I discovered that the majority of that data is gone — apparently just truncated below a certain point. I assume that means I’ve exceeded some pre-set limit? If so, I have three further questions:

    1. Is there any way I can recover the data?
    2. Is the limit local to the private scratchpad, or global across my home node, public scratchpad, and messages?
    3. How can I monitor my data usage in the future to prevent this from recurring?

    Thanks,

    Athanasius <°(((><contra mundum Iustus alius egestas vitae, eros Piratica,
