Boston Limited, specialists in low-power servers, have unveiled the latest release of their Viridis microserver range, powered by the quad-core ARM Cortex-A15 processor. Based on six EnergyCore ECX-2000 cards, the Boston Viridis 2.0 hosts two four-core SoCs (Server-on-Chip) running at 1.8GHz and consuming as little as 6 watts of power.
Built around the quad-core ARM Cortex-A15, the ECX-2000 delivers twice the performance, three times the memory bandwidth, and four times the memory capacity of the earlier ground-breaking ECX-1000.
It is highly scalable thanks to the integrated 80Gb Fleet Fabric switch, while the embedded Fleet Engine simultaneously provides out-of-band control and the intelligence for autonomic operation and power optimization.
Cassandra has been a wildly popular key-value (NoSQL) database thanks to its performance and scalability. The announcements for Cassandra 2.0 claim to offer more traditional database features as well as the means to integrate with large event-processing engines.
These features include lightweight transactions (compare-and-set), experimental triggers, and improvements to CQL.
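The headline Cassandra 2.0 feature, lightweight transactions, adds compare-and-set semantics to what was previously a last-write-wins store. A minimal in-memory sketch of the idea (illustration only; Cassandra actually implements this across replicas with the Paxos consensus protocol):

```python
# Sketch of the compare-and-set semantics behind Cassandra 2.0's
# "INSERT ... IF NOT EXISTS" lightweight transactions. This toy class
# only illustrates the semantics, not Cassandra's Paxos-based machinery.

class TinyKeyValueStore:
    def __init__(self):
        self._rows = {}

    def insert_if_not_exists(self, key, value):
        """Return (applied, current_value): write only when key is absent."""
        if key in self._rows:
            return False, self._rows[key]   # conditional write rejected
        self._rows[key] = value
        return True, value                  # conditional write applied

store = TinyKeyValueStore()
print(store.insert_if_not_exists("user:1", "alice"))  # (True, 'alice')
print(store.insert_if_not_exists("user:1", "bob"))    # (False, 'alice')
```

The point of the conditional write is that two concurrent inserts can no longer silently overwrite each other; exactly one "wins" and the other learns the existing value.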
Adapteva has announced that the first “beta” units of its 16-core 'supercomputer' are being shipped to the early Kickstarter 'developer' backers. Other backers are said to receive their boards by summer's end, “after some final refinements.”
Adapteva has also now opened up general pre-orders for the 16-core version on its website. While all Kickstarter-bought boards will bear a Zynq-7020 SoC, new pre-orders are configured with a 7010 as standard.
However, newcomers will receive “Gen-1” boards, which offer slight improvements over earlier versions, such as reduced power consumption and an added three-pin serial-port header.
The basic 16-core board is going for $99 on the online store, with an expected October delivery date. The company tells us the 64-core version will also be available for public consumption, with pre-orders beginning in Q4 this year.
In Sun Tzu's terms, tech is like fighting in a marsh: you have to keep moving or sink.
It wasn't that long ago that MongoDB was the disruptive technology. Now a blizzard of NoSQL databases are disrupting the disrupters. Google's Sergio Bossa gives the reasoning for moving from MongoDB to Cassandra:
As regards what made them look into a different solution:
For good measure, he is not a big fan of Scala either: http://thegeektalk.com/interviews/sergio-bossa/
With data volumes and the pressure to reduce power both increasing, it is back to the future with NCSA's 380-petabyte High Performance Storage System (HPSS): the world's largest automated near-line data repository, comprising multiple automated tape libraries, dozens of high-performance data movers, a large 40 Gigabit Ethernet network, hundreds of high-performance tape drives, and about 100,000 tape cartridges.
The system is being used with the petascale Blue Waters supercomputer, making it the world's largest HPSS now in production.
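The scale of the repository is easier to grasp with a quick back-of-envelope calculation from the figures above (380 petabytes across roughly 100,000 cartridges; the per-cartridge average is derived, not a quoted spec):

```python
# Back-of-envelope: average capacity per tape cartridge in a 380 PB
# repository spread over ~100,000 cartridges (figures from the article,
# using decimal petabytes and terabytes).
total_bytes = 380 * 10**15       # 380 petabytes
cartridges = 100_000
per_cartridge_tb = total_bytes / cartridges / 10**12

print(per_cartridge_tb)          # 3.8 TB per cartridge on average
```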
Perhaps more interesting is to compare Dart with other languages, against which it is doing pretty well.
Data warehousing used to be the place where legacy data went to die. Now Big Data means the data warehouse has become the center of online data analysis and a key part of strategic direction.
Teradata is now blending its Hadoop “best of breed component” approach, which honors the best component developers, with a well-engineered package. This includes:
Teradata Studio with Smart Loader for Hadoop Access, which supports Hortonworks and Cloudera.
Teradata SQL-H, which blends Hadoop with standard SQL databases. SQL-H, which supports Hortonworks, is designed to allow analysts to query data wherever it resides.
Our Raspberry Pis are quietly working away as build monitors, saving power and money.
Three more options are popping up, offering a variety of other features.
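At its core, a build monitor just polls the CI server and maps the latest build status to a display colour. A minimal sketch of that mapping (the status names and colours here are illustrative, not any particular CI tool's API):

```python
# Map a CI build status to a traffic-light colour for a monitor display.
# Status names and colours are illustrative; a real monitor would poll a
# CI server's status endpoint (Jenkins, TeamCity, etc.) and drive a
# screen or LED attached to the Pi.
STATUS_COLOURS = {
    "success": "green",
    "building": "yellow",
    "failure": "red",
}

def monitor_colour(status):
    # Unknown statuses are shown grey rather than raising an error,
    # so an odd CI response never blanks the monitor.
    return STATUS_COLOURS.get(status, "grey")

print(monitor_colour("failure"))   # red
print(monitor_colour("timeout"))   # grey
```

This kind of tiny poll-and-display loop is exactly the workload a low-power Raspberry Pi handles well.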
“Resistance is futile!” The leading memory makers, namely Micron, Samsung and Hynix, are jointly backing the technology development efforts of the Hybrid Memory Cube Consortium (HMC). The technology, called a Hybrid Memory Cube, stacks multiple volatile memory dies on top of a DRAM controller.
These three-dimensional chips will rely on the relatively new through-silicon via (TSV) technology as their interconnect.
The first Hybrid Memory Cube specification will deliver 2GB and 4GB capacities, providing aggregate bidirectional bandwidth of up to 160GBps, compared with DDR3's 11GBps and DDR4's expected 18 to 20GBps of aggregate bandwidth.
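To put those headline numbers side by side, the speed-up works out to roughly an order of magnitude (simple arithmetic on the figures quoted above; the DDR4 value is the midpoint of the quoted range):

```python
# Aggregate bidirectional bandwidth figures from the HMC announcement.
hmc_gbps = 160     # Hybrid Memory Cube, up to 160 GBps
ddr3_gbps = 11     # DDR3, ~11 GBps
ddr4_gbps = 19     # DDR4, midpoint of the quoted 18-20 GBps range

print(round(hmc_gbps / ddr3_gbps, 1))  # ~14.5x DDR3
print(round(hmc_gbps / ddr4_gbps, 1))  # ~8.4x DDR4
```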
As reported before, HP has been hinting at super-dense ARM servers. However, instead of offering just ARM, HP has announced that it will offer a variety of low-power processors from AMD, AppliedMicro, Calxeda (ARM), Intel, and Texas Instruments, all of which will allow customers to match performance against their energy-efficiency needs.
Intel has released its own version of the open-source Hadoop platform. This version naturally supports optimizations for Intel processors to boost performance (and sell more Intel servers). It will also pair the Intel Distribution with Pentaho's range of analytics software, bringing new data mining, analysis, interactive reporting and other capabilities to the distro.
The move is interesting in that it also demonstrates how open-source software like Hadoop drives the success of profit-making companies that build the commercial products and provide the technical support enterprises need to run it. We've seen similar announcements from Cloudera, from Hortonworks, which has just put out a Windows version of HDP, and from EMC's Greenplum, as Hadoop becomes a corporate standard.
Some could see this move by Intel as a threat to startups like Hortonworks and Cloudera, but Davis says that this is far from the case. Instead, the company is planning to share these advancements with the Hadoop community at large, having already announced a team-up with Red Hat, so eventually everyone should be able to benefit – providing yet more evidence that Intel is getting into this game solely to find new customers for its chips.
The Bina Genomic Analysis Platform helps pharmaceutical companies, biotech companies, researchers, and clinicians analyze large amounts of genomic sequencing data. With the DNA sequence data from one person taking up half a terabyte of space, a drastically increasing number of datasets requires not only more space, but new software to compare samples to each other and draw conclusions.
The Bina Genomic Analysis Platform works with existing sequencing systems, taking the data produced by those systems and assembling it into a format that is usable for medical discovery and patient care. The product is a combination of hardware and software, with customers paying an annual subscription fee for the software.
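At half a terabyte of sequence data per person, the figure quoted above, storage needs scale linearly with cohort size, which is what pushes sequencing projects toward platforms like this. A quick back-of-envelope:

```python
# Storage back-of-envelope using the article's figure of roughly half a
# terabyte of DNA sequence data per person.
TB_PER_GENOME = 0.5

def cohort_storage_tb(people):
    """Approximate raw sequence storage for a cohort, in terabytes."""
    return people * TB_PER_GENOME

print(cohort_storage_tb(1000))   # 500.0 TB for a 1,000-person study
```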