Tuesday, 5 November 2013

Making one photo album with two cameras: how to merge the photo sets in a smart way


Coming back from my New York holiday, I had to merge two photo sets, taken with two cameras, into a single photo album.
I'll provide here a quick walkthrough using Linux and the jhead console command.

To install the jhead command, please refer to your Linux distro's package manager.


When merging photo sets from two (or more) cameras, you face these problems:
  • different filename conventions for image files
  • different progressive numbers in image filenames
  • camera clocks are not synced
Let's assume you have moved the two photo sets into two folders, /frank and /ann.
The first step is to fix the clock delta; you have to find two shots taken at the same moment by Frank and Ann (call them photo A and photo B).

You can use photo A and photo B to calculate the time delta between the camera clocks.

Let's open these images with an image viewer and read the time/date EXIF information (usually you have to look at the image "properties").

OR

you can obtain the time/date info by using the jhead command from the shell, e.g.

>jhead IMG_2337.JPG 

File name    : IMG_2337.JPG
File size    : 3184059 bytes
File date    : 2013:10:10 16:06:04
Camera make  : Canon
Camera model : Canon DIGITAL IXUS 75
Date/Time    : 2013:10:10 16:06:05
Resolution   : 3072 x 2304
Flash used   : No (auto)
Focal length :  5.8mm  (35mm equivalent: 37mm)
CCD width    : 5.72mm
Exposure time: 0.017 s  (1/60)
Aperture     : f/2.8
Focus dist.  : 3.84m
ISO equiv.   : 160
Whitebalance : Auto
Metering Mode: matrix  
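
If the photo sets are large, instead of opening each picture you can dump the Date/Time tag of every shot from the shell and compare the two lists; a quick sketch (it only assumes that the files match *.JPG and that jhead prints a "Date/Time" line, as in the output above):

>cd frank
>for f in *.JPG; do echo -n "$f  "; jhead "$f" | grep "Date/Time"; done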

Consider this situation:
  1. Ann's photo, photo B, Date/Time    : 2013:10:10 16:06:05
  2. Frank's photo, photo A, Date/Time    : 2013:10:10 16:16:25

Frank's clock is 10 minutes and 20 seconds fast; you are going to fix this:

>cd frank
>jhead -ta-0:10:20 *.JPG

This command fixes the EXIF time information by decreasing the photos' time/date by 10 minutes and 20 seconds (-0:10:20). The two photo sets are now time-synced.
jhead provides other options to easily manage very big clock deltas (days, months or years); please refer to the man page.
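
For example, the -da option expresses the offset as the difference between two dates; the following is only a hedged sketch (I am assuming the yyyy:mm:dd date format here, please check the man page before running it), shifting the timestamps by roughly one year:

>jhead -da2013:10:10-2012:10:10 *.JPG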

The next step is renaming: you are going to replace the photo filenames with new ones based on their EXIF time/date;
let's use jhead again:

>cd frank
>jhead -n%Y%m%d-%H%M%S-frank *.JPG
>cd ../ann
>jhead -n%Y%m%d-%H%M%S-ann *.JPG

For example, this renames Ann's photo IMG_2337.JPG to 20131010-160605-ann.jpg.
Please note the suffix -ann (and -frank): it avoids name clashes when mixing photos that share the same timestamp.

Now you can simply move (or copy) the photos into the brand-new /happyholidays folder.

mv /ann/* /happyholidays
mv /frank/* /happyholidays

I hope this is useful, have fun. :-)

Tuesday, 1 October 2013

PS4 vs Xbox, provincial projections



A few days ago I stopped by a video game shop in Portogruaro and asked a shop assistant for various info about the consoles; since she was very helpful, I took the chance to ask about behind-the-scenes impressions, like how sales were going, etc. I found out some interesting things.

  • Both consoles are selling well
  • Pre-orders for the first wave are almost sold out; latecomers will have to wait until next year, and perhaps a few months more
  • In Portogruaro the PS4 is selling MUCH better than the Xbox
  • The shop assistant has a theory that console preferences are local and territorial in nature (*)

(*) Let me explain; the idea is this: kids tend to buy what their friends have, so if an area is traditionally pro-Sony it will tend to stay that way, and the same goes for Xbox areas;
according to the proposed model, balanced situations would be unstable.

The shop assistant's theory predicts that PS4 towns and Xbox towns will emerge; Portogruaro is definitely a PS4 stronghold.

It would be interesting to verify this..

Sunday, 3 March 2013

Eclipse CDT Linux Howto


UPDATE 5 March 2013: I confirm, Juno CDT has made a few improvements, but when doing real work, restarting Eclipse once every hour isn't handy, and it is still far too slow (this is my opinion after a few days of real use). My opinion is that Eclipse CDT Juno is not the best solution for working with C/C++ and being productive.

UPDATE 3 March 2013: Eclipse CDT performance looks better after the latest IDE update. After 3 hours Xorg still raises CPU usage and things slow down, but it is far, far better.


Eclipse CDT looks like a great C/C++ IDE but, with default settings, it is very slow on Linux; the main issues with the default configuration are:
  • memory consumption and GC slowness
  • Xorg (Linux) CPU usage, which adds more slowness
  • overall slowness (it really is very, very slow)
  • bugs which cause crashes
Using Eclipse CDT is a pain and it looks unusable even on decent hardware (Intel Core2Quad Q9300@2.5GHz, 8GB RAM and an SSD); I have fine-tuned the default configuration according to this guide and other Stack Overflow answers in order to solve these issues.

Let's fix it!

Edit eclipse.ini and raise the JVM memory settings; these are mine (I have 8GB of RAM so these values are a bit exaggerated: about 2 GB is allocated for Eclipse right at startup):

-Xss2m
-Xms1024m
-Xmx4096m
-XX:MaxPermSize=1024m
-XX:PermSize=1024m

Raising these values may help the GC (garbage collector); perhaps because of this, I have noticed fewer IDE assertions and errors.

Eclipse theme support on Linux is a useless mess: GTK+-Qt is buggy and has lots of leaks which cause Xorg to eat up all your CPU, so disable the GTK-Qt theme support.
Run Eclipse, go to Window -> Preferences -> General -> Appearance and set the Theme to Classic (if you know a better way, please give me feedback).

There is an old bug which causes Eclipse CDT to crash suddenly; add this line (VM configuration) to eclipse.ini:

-XX:-UseCompressedOops 

Tune the CDT settings so that they are less aggressive (CDT has to parse the sources by itself). I have adopted only a few changes; let's quote from the guide (I suggest reading ALL of the guide, it is useful when working on big projects):
Whenever you create a new workspace for a Mozilla source tree, you should be sure to turn off the following two settings in the workspace preferences (Window > Preferences, or Eclipse > Preferences) before creating a project in that workspace:
  • in "General > Workspace", disable "Build automatically"
  • in "C/C++ > Indexer", disable "Automatically update the index"
Turning off automatic indexing prevents the CPU intensive indexer from running at various stages during the steps below before we're ready.
Warning: because of this change, while developing you sometimes need to trigger indexing manually (right-click on the project, Index -> Rebuild).

You can also configure your project properties to enable multicore compilation: Project Properties -> C/C++ Build -> Behavior -> Enable parallel build

Conclusion

The fine-tuned Eclipse CDT looks usable and there are no hangs; the IDE still slows down during programming sessions and becomes very slow after 1 or 2 hours.


UPDATE: I feel like my how-to is just a workaround; in fact it seems there are issues with Eclipse UI performance in Juno, take a look here

Thursday, 7 February 2013

gwt-java-benchmarks released

Hi,
 I've just released the gwt-java-benchmarks code under the GPLv2 license.


I think this is fair-quality code, not enterprise grade ;-) but it's done and it just works.
I am going to tune the sieve benchmark with the high-memory configuration in the future, but for now it is better to remove those values from your CSV output.
I hope this is useful.

P.S.: I'm looking for suggestions about the SVN repository structure.
At the moment the library code is simply copy-pasted into 2 sub-projects; it is the same code, so I hope there is a better way to share code between NetBeans projects, but for now this is the most viable solution.

Have fun

EDIT: I have tuned the sieve benchmark, problem fixed.


Saturday, 2 February 2013

GWT Benchmarks: GWT+JsVM vs JavaVM

  What is GWT?

GWT (Google Web Toolkit) is a tool for compiling Java into JavaScript code; with GWT you can write both your web application and your server-side code using the Java language.
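
For readers new to GWT, here is a minimal, hedged example of an entry point class (the package name is made up; the widget classes are standard GWT 2.x API); the GWT compiler turns this Java class into JavaScript running in the browser:

package org.example.client; // hypothetical package, not part of any project mentioned here

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// Plain Java code: the GWT compiler translates it into JavaScript.
public class HelloGwt implements EntryPoint {
    @Override
    public void onModuleLoad() {
        Button button = new Button("Say hello");
        button.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event) {
                Window.alert("Hello from Java, running as JavaScript!");
            }
        });
        RootPanel.get().add(button); // attach the widget to the host page
    }
}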

 

What about GWT performance?


I have worked with GWT in recent years and a simple question has arisen:

"How does the same Java code perform when running inside the Java virtual machine VS running inside the browser?"

I have decided to do a bunch of experiments and write this article after taking these facts into account:
  • GWT developers state that the generated JavaScript code may be better than handwritten code
  • the V8 JavaScript engine and the other JavaScript engines have been greatly improved in recent years
It also makes sense to measure the performance of GWT applications running on different browsers (Firefox, Opera, Chrome, Internet Explorer) in order to see who sucks and who rules.

I think these results may be useful from a distributed High Performance Computing perspective (via web clients).

 

 ..but what is performance?


I'm looking at numeric and data-crunching performance; input/output, graphics and multimedia performance are off topic.
I have measured the execution time of the same code compiled with GWT, running inside the browser JavaScript VM (JsVM), and running "natively" inside the Java SE 7 Virtual Machine (JavaVM).
A short execution time means high performance; it's quite simple.

 

The Benchmark suite (Internals)


I have developed a little suite (EDIT: sources released) to run a bunch of deterministic, single-threaded benchmarks covering string crunching, numerical crunching and data manipulation; the suite performs these benchmarks after a warm-up.
The warm-up has been configured to make sure that all the benchmark code is JITted first.
To be clear, the JsVM compiles the JavaScript only after a lot of execution cycles (e.g. 11000), while the JavaVM compiles the bytecode to native code within a few cycles.
After the warm-up, the benchmark is executed many times and the overall execution time is recorded; it's the same approach as taking an average execution time.
The GWT and native benchmark suites share the same Java code and benchmark configurations; only the Main classes are different.
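
As an illustration, here is a hedged Java sketch of the warm-up and timing approach described above; the Benchmark interface and the iteration counts are hypothetical and are not the actual suite code:

// Hypothetical interface: one deterministic, single-threaded benchmark.
interface Benchmark {
    void run();
}

class BenchmarkTimer {
    // Run the benchmark warmupRuns times so the JsVM/JavaVM JIT kicks in,
    // then record the overall time of measuredRuns executions.
    static long overallTimeMillis(Benchmark b, int warmupRuns, int measuredRuns) {
        for (int i = 0; i < warmupRuns; i++) {
            b.run(); // results discarded, only used to trigger JIT compilation
        }
        long start = System.currentTimeMillis(); // available in the GWT JRE emulation too
        for (int i = 0; i < measuredRuns; i++) {
            b.run();
        }
        return System.currentTimeMillis() - start; // overall time, proportional to the average
    }
}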
Here is the list and a short description of the performed benchmarks:
  • CollectionSortBenchmark: tests Java collection sort, shuffle and manipulation.
  • FFTBenchmark: computes FFTs of complex, double-precision data; GSL code taken from SciMark.
  • RegexBenchMark: applies a regex (JavaScript compatible) to a text.
  • SieveBenchmark: computes prime numbers in a given numeric range (see the sketch after this list).
  • StringCrunchBenchmark: String and StringBuilder manipulation, append and delete.
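
To give an idea of this kind of numeric workload, here is a hedged sketch of a classic sieve of Eratosthenes, similar in spirit to what SieveBenchmark does (this is not the released benchmark code):

static int countPrimes(int limit) {
    // Classic sieve: mark every multiple of each prime as composite.
    boolean[] composite = new boolean[limit + 1];
    int primes = 0;
    for (int i = 2; i <= limit; i++) {
        if (!composite[i]) {
            primes++;
            for (long j = (long) i * i; j <= limit; j += i) {
                composite[(int) j] = true;
            }
        }
    }
    return primes;
}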

These benchmarks have been performed twice with different configurations: once using small data sets, and a second time on huge data sets (big arrays, big lists and so on).
This is intended to track the JsVM and JavaVM behavior when playing with memory-demanding applications; yes, this looks like a good old garbage collector stress test. ;-)
 

The Benchmark Setup


The suite has been executed on an Asus K51 notebook equipped with:
  • Intel Core2Duo T6600@2.2GHz
  • 4 GB RAM
  • Windows 7 Home (x64)
The web benchmark suite is loaded from a GlassFish application server running on another machine; the latest GWT 2.5 SDK has been used to compile the benchmarks into JavaScript code (obfuscated output).

Let's see the Results


Here you have a bar graph (Figure 1) generated with (the great) R; the bar heights represent the benchmark execution times, normalized to the JavaVM 1.7.0 execution time. A short bar means a short execution time and high performance; please note that the y-axis has a logarithmic scale.
Each experiment name (x-axis) has a number appended to it: the bigger the number, the bigger the data size used; FFTBenchmark5 is less memory-demanding than FFTBenchmark6.
Take a look at the graph: the first benchmark on the left is CollectionSortBenchMark3; the red bar represents the JavaVM (ver. 1.7.0_05) execution time, the yellow bar is the Chrome browser (V8 engine), about 2 times slower than the native JavaVM; the other browsers look about 10 times slower than Java code running "natively".
The blue bar refers to Microsoft Internet Explorer version 9 (MSIE).

Figure 1
Shorter execution times (tiny bars) mean higher performance



A last note about these results: I had to remove a sieve benchmark on a larger numeric range (big numeric arrays). It's my fault: I did not tune the number of iterations correctly and the JavaVM completed the task in ZERO ms!! Quite strange and annoying, but I have not had the time to dig into it.
Another note: browsers have some limitations on array size; Firefox has a higher limit, but the problem applies there too. So, sometimes all the JsVMs suck.

Conclusions


Results in Figure.1 are quite self-explanatory.

From a GWT application standpoint Chrome rules, aka V8 rules.
A Java program running inside the Chrome browser performs about 2 times slower, but it's so close to the JavaVM, so close.
Another interesting point is Chrome's regular expression execution time: the Java regex benchmarks perform far better in Chrome than in the JavaVM; you know, the GWT regex classes are backed directly by low-level JsVM functions and they look terrific.
Kudos to Chrome's regex implementation.
The GWT Java performance in Opera, Firefox and MSIE sucks a bit; running the code on those browsers is 5 to 10 times slower than running on the JavaVM.
Firefox sucks a lot with numeric workloads or when crunching big arrays of numeric data, running 40 (or 50) times slower than the JavaVM.
I strongly suspect that the GWT developers devote themselves to Chrome optimization, and for this reason GWT compilation for the other browsers sucks. I have no clue, do you? :-)

From an HPC standpoint, GWT apps on Chrome perform very well compared to native Java Virtual Machine apps, so GWT looks like a good tool for writing distributed High Performance Computing web clients.

PS:
Comments and criticisms are welcome