We design methods and tools for concurrent, parallel, and distributed programming. We mostly focus on C, C++11, and Java, but we are open to (almost) any technology. We develop applications of real complexity, not just micro-benchmarks or unit tests. We are problem-driven: we work to understand, abstract, and codify methodological processes and best practices that genuinely help those who develop complex software with strong quality requirements: speed, robustness, portability, performance, etc.
Recently, we have been quite focused on high-performance data analytics, and in particular on the programming tools and run-time supports needed to execute parallel algorithms on data streams.
The tools and applications we develop work on Intel x86_64, ARM, and IBM Power, under Linux, Windows, and Mac OS. We did not spend much effort and time getting each of them to run everywhere; rather, we invested time in designing development environments that make applications highly portable. With proper abstractions, much of the most delicate development work can be moved from end users to the developers of the development tools themselves. The advantage in terms of innovation is immediate: reduced time-to-market, maintenance cost, and performance-tuning effort.
The leitmotif of our group is data-centric parallel programming models. Every language, library, or tool we produce is therefore the realization of a model, and an experiment with it. Over the years we have explored many approaches. Some of these experiments have been quite successful in the open-source community and in the software industry. Currently, in terms of software development, our flagship is the FastFlow programming environment.
FastFlow is a C++ header-only library providing a pattern-based parallel programming environment. FastFlow supports multi-core platforms, GPUs, FPGAs, and distributed systems, and promotes a data-centric programming model. In FastFlow, the programmer writes code using patterns such as Pipeline, Farm, Map, Reduce, Stencil, MapReduce, StencilReduce, etc. Patterns are defined on data streams and are implemented on top of an actor-like model. Applications written with FastFlow can harness the power of heterogeneous platforms, and their speed is generally comparable to or better than the same applications developed with other mainstream tools such as OpenMP, Intel TBB, Cilk, etc. In particular, FastFlow can efficiently support stream-parallel processing at very high frequency and very fine grain (down to tens of nanoseconds per task). For us, FastFlow is above all a laboratory for developing and testing new solutions and applications. We update the code almost daily and publish it on the SourceForge SVN.
FastFlow is described in a number of research papers: a list can be found here.