New products, Conferences, Books, Papers, Internet of Things

Archive for December, 2012

IBM Looks Ahead to a Sensor Revolution and Cognitive Computers

From the NYT:

The year-end prediction lists from technology companies and research firms are — let’s be honest — in good part thinly disguised marketing pitches. These are the big trends for next year, and — surprise — our products are tailor-made to help you turn those trends into moneymakers.

But I.B.M. puts a somewhat different spin on this year-end ritual. It taps its top researchers worldwide to come up with a list of five technologies likely to advance remarkably over the next five years. The company calls the list “Five in Five,” with the latest released on Monday. And this year’s nominees are innovations in computing sensors for touch, sight, hearing, taste and smell.

Touch technologies may mean that tomorrow’s smartphones and tablets will be gateways to a tactile world. Haptic feedback techniques, along with infrared and pressure-sensitive technologies, I.B.M. researchers predict, will enable a user to brush a finger over the screen and feel the simulated touch of a fabric, its texture and weave. The feel of an object can be translated into a unique vibration pattern, a kind of tactile counterpart to a fingerprint or voice print. Different vibration patterns would then simulate the distinct feel of fabrics such as wool, cotton or silk.

The coming sensor innovations, said Bernard Meyerson, an I.B.M. scientist whose current title is vice president of innovation, are vital ingredients in what is called cognitive computing. The idea is that in the future computers will be increasingly able to sense, adapt and learn, in their way.

That vision, of course, has been around for a long time — a pursuit of artificial intelligence researchers for decades. But there seem to be two reasons that cognitive computing is something I.B.M., and others, are taking seriously these days. The first is that the vision is becoming increasingly possible to achieve, though formidable obstacles remain. I wrote a piece in the Science section last year on I.B.M.’s cognitive computing project.

The other reason is a looming necessity. When I asked Dr. Meyerson why the five-year prediction exercise was a worthwhile use of researchers’ time, he replied that it helped focus thinking. Actually, his initial reply was a techie epigram. “In a nutshell,” he said, “seven nanometers.”

Read the whole article here.

EWSN 2014

The European Conference on Wireless Sensor Networks (EWSN) is a leading international conference devoted to exchange of research results in the field of wirelessly networked embedded sensors. The EWSN Steering Committee is seeking proposals for hosting EWSN 2014.

Such proposals should contain:

* Description of the proposed venue (pictures help)
* Reachability of the venue from Europe and the World
* Options for accommodation in the vicinity of the venue, including approximate room rates
* Options for social events
* Names and short bios of the proposed general chair and program chairs
* Outline of the conference budget including estimated registration fees
* Research and industry activities related to EWSN in the vicinity

Innovative ideas to make the conference attractive to submitters and attendees (e.g., hot focus areas, unusual session formats, forms of interactions, etc.) are particularly welcome and should be outlined in the proposal.

Proposals should be sent by email as a single PDF file containing at most 6 pages to Kay Roemer <roemer@iti.uni-luebeck.de> by January 18, 2013. The EWSN steering committee will review all proposals and may contact the proposers to obtain additional information.

Authors of the successful proposal are expected to present the 2014 venue at EWSN 2013 in Ghent, Belgium.

Since 2004, EWSN has been held annually in February at European destinations including Berlin (Germany), Istanbul (Turkey), Zurich (Switzerland), Delft (The Netherlands), Bologna (Italy), Cork (Ireland), Coimbra (Portugal), Bonn (Germany), Trento (Italy), and Ghent (Belgium). It is a 2.5-day event with parallel tutorials in the morning of the first day, followed by a single-track technical program. An important element of the conference is a combined demonstration and poster session with typically 30-50 exhibits. The conference proceedings are published in the Springer LNCS series. In the past, the conference has had 100-200 attendees and has typically been held on a university campus to keep registration fees low.

Dates:

Submission deadline: January 18, 2013
Notification: February 2013

More on EWSN here.

Mobile Sensor Network: LINC

From Postscapes:

LINC is a product from North Carolina engineering firm Techmor that hopes to empower those with no electronics knowledge to create their own mobile sensor networks.

At the heart of the modular system is the LINC Bridge connector, containing a battery, a wireless radio (Bluetooth 4.0/LE and XBee), and a microprocessor. A Bridge can accept up to four LINC “Smart Sensors” and one 0-5 V analog input sensor. Out of the box there are five sensor types to plug and play into the Bridge module: temperature, linear position, force/load, wind speed and pressure. More sensor models are expected after the initial product launch, and all come with built-in calibration and a unique identifier.
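The Bridge topology described above (up to four smart-sensor slots plus one 0-5 V analog input) can be sketched as a simple data model. All class and field names below are hypothetical illustrations, not Techmor's actual API:

```python
from dataclasses import dataclass, field

# The five launch sensor types named in the article (identifiers are my own).
SENSOR_TYPES = {"temperature", "linear_position", "force_load", "wind_speed", "pressure"}

@dataclass
class SmartSensor:
    kind: str
    serial: str  # each LINC sensor carries a unique identifier

    def __post_init__(self):
        if self.kind not in SENSOR_TYPES:
            raise ValueError("unknown sensor type: %s" % self.kind)

@dataclass
class Bridge:
    """Models a LINC Bridge: up to four smart-sensor slots plus one 0-5 V analog input."""
    smart_slots: list = field(default_factory=list)

    def attach(self, sensor):
        if len(self.smart_slots) >= 4:
            raise RuntimeError("all four smart-sensor slots are occupied")
        self.smart_slots.append(sensor)

    def read_analog(self, volts, full_scale):
        # Scale a raw 0-5 V reading onto the attached sensor's full-scale range.
        if not 0.0 <= volts <= 5.0:
            raise ValueError("analog input must be within 0-5 V")
        return volts / 5.0 * full_scale
```

The four-slot limit and the 0-5 V analog range are the only details taken from the article; everything else is scaffolding.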

Once data is flowing from the sensors to your mobile phone, you can use the system’s mobile app to view your sensor network’s output in real time, and then store the data locally, share it via email or text, or upload it to the cloud.

More info here.

e-Health Biometric Sensor Platform for Arduino and Raspberry Pi

The new e-Health Biometric Sensor Platform developed by Cooking Hacks (the open hardware division of Libelium) allows Arduino and Raspberry Pi users to build biometric and medical applications that require body monitoring, using 9 different sensors: pulse, blood oxygen (SpO2), airflow (breathing), body temperature, electrocardiogram (ECG), glucometer, galvanic skin response (GSR – sweating), blood pressure (sphygmomanometer) and patient position (accelerometer). This information can be used to monitor a patient’s state in real time, or to gather data for subsequent analysis and medical diagnosis. The biometric information can be sent wirelessly using any of the 6 available connectivity options (Wi-Fi, 3G, GPRS, Bluetooth, 802.15.4 and ZigBee), depending on the application.

If real-time image diagnosis is needed, a camera can be attached to the 3G module to send photos and videos of the patient to a medical diagnosis center. Data can be sent to the cloud for permanent storage, or visualized in real time by sending it directly to a laptop or smartphone. iPhone and Android applications have been designed to make it easy to see the patient’s information. Read more.

 

Lapka for iPhone: five sensors to measure the world

“It’s hard to find someone who’d say ‘I’d love to measure radiation’ on a day-to-day basis,” says Marmeladov, but that’s exactly what he intends to convince you of. Each sensor is made from wood and injection-molded plastic, and looks like it would fit better on the shelf of an Apple Store than in your high school’s science lab. In fact, each sensor plugs into your iPhone’s headphone jack as if it were a Square card reader.

Like Verge editor Ben Popper, Marmeladov discovered that feeling and measuring the invisible things around you can be strange and enlightening. The pack of sensors is about environmental life-logging and keeping a journal of the invisible fields you inhabit every day. Lapka is also about finding the joy in building something cool nobody has tried before. Marmeladov, dressed from head to toe in black and sporting a modern mohawk, states proudly: “Our goal is to mix Yves Saint Laurent and NASA together.”

More info here.

Contiki Regression Tests: 9 Hardware Platforms, 4 Processor Architectures, 1021 Network Nodes

Contiki gets a regression test framework from Thingsquare Mist, with Travis integration, that lets us test every new commit on 9 hardware platforms, 4 processor architectures, and 1021 emulated network nodes.

Despite its small footprint, Contiki is a complex system with multiple layers of interrupts, processes, protothreads, serial port input and output functions, radio device drivers, power-saving duty cycling mechanisms, medium access control protocols, multiple network stacks, fragmentation techniques, self-healing network routing protocols, best-effort and reliable communication abstractions, and Internet application protocols. These run on a wide range of microprocessor architectures and hardware devices, and are compiled with a variety of C compilers.
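One ingredient in that list, protothreads, is Contiki's technique for writing event-driven code in a thread-like style; the real implementation is a set of C macros. As a loose illustration only, Python generators capture the same cooperative idea, where each task suspends itself at explicit yield points and a scheduler resumes them in turn:

```python
def blink(name, n):
    # A protothread-like cooperative task: each yield is a voluntary
    # suspension point, loosely analogous to a PT_WAIT in Contiki.
    for i in range(n):
        yield "%s tick %d" % (name, i)

def run(tasks):
    """Round-robin scheduler: resume each live task once per pass until all finish."""
    out = []
    while tasks:
        for t in list(tasks):
            try:
                out.append(next(t))
            except StopIteration:
                tasks.remove(t)
    return out
```

This is not Contiki's mechanism (which needs no generator support and costs a couple of bytes per thread), but it shows why the style suits memory-constrained nodes: tasks share a single stack and never preempt each other.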

Typical Contiki systems also have extreme memory constraints and form large, unreliable wireless networks. How can we ensure that Contiki, with all these challenges, does what it is supposed to do?

Over the years, open source projects have tried different ways to ensure that the code is always stable across multiple platforms. A common approach has been to ask people to test the code on their own favorite hardware well before a release. This was the approach Contiki took a few years ago. But it is really hard to get good test coverage this way, particularly for systems that are inherently networked. Most testers won’t have access to large numbers of nodes, and even if they do, tests are difficult to set up because of the size of the networks needed for testing. Also, since people are more motivated to run tests near a release, large numbers of bugs may be found only right before the release. It would be great to find those bugs much earlier.

Many projects do nightly builds to ensure that the source code is kept sane. This is something we have done for a long time in Contiki: the code has been compiled with 5 different C compilers for 12 platforms. But this is not enough to catch problems with code correctness, as the functionality of the system is not tested. Testing the functionality is much more difficult, since it requires us to actually run the code.

Fortunately, Contiki provides a way to run automated tests in large networks with a fine-grained level of detail: Cooja, the Contiki network simulator. But taking this to a full regression test framework took a bit of work.

First, to make scripted simulation setups easier, Cooja author Fredrik Österlind wrote a test script framework for Cooja. Second, GitHub contributors Rémy Léone and Ilya Dmitrichenko developed a Travis plugin for Contiki. And now Contiki gets a new regression test framework from Thingsquare Mist.
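In this setup, a Cooja script test conventionally signals a passing run by logging a final "TEST OK" line, which the regression harness then checks for. A minimal checker for such a log might look like the sketch below; the exact log-file conventions are an assumption worth verifying against the framework itself:

```python
def cooja_test_passed(testlog_text):
    """Return True iff a Cooja script-test log reports success.

    Assumes the convention that a passing run ends its log with a
    'TEST OK' line; anything else (including an empty log, e.g. from
    a simulation that timed out) counts as failure.
    """
    lines = [line.strip() for line in testlog_text.splitlines() if line.strip()]
    return bool(lines) and lines[-1] == "TEST OK"
```

Treating an empty or truncated log as a failure is the conservative choice for CI: a node that never booted should not look like a pass.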

More info here.

DOCOMO to launch global M2M platform

NTT DOCOMO, Japan’s leading mobile operator and provider of integrated services centered on mobility, announced that it will become Japan’s first mobile operator to offer a global enterprise platform for managing the communication lines of wireless machine-to-machine (M2M) systems, beginning December 6.

The platform will manage communication lines not only on DOCOMO’s mobile network, including through international roaming, but also on the networks of operators outside Japan.

The new service, “docomo M2M Platform,” will connect M2M communication lines in over 200 countries with a unified Web interface, which corporate customers can combine with M2M solutions for purposes such as the worldwide management of vehicle or construction machinery fleets.

Conventional M2M platforms must be customized to each local mobile operator’s communication line, thus requiring a separate platform in each country. With docomo M2M Platform, however, customers can eliminate the cost of operating multiple M2M platforms.

The new offering utilizes the M2M platform of Jasper Wireless, a U.S.-based company with a proven track record of providing M2M infrastructure to telecommunications carriers worldwide.

Additional features made possible by docomo M2M Platform’s convenient Web interface include:
- Real-time monitoring of data usage and communication fees
- Remote activation and deactivation of communication lines
- Basic diagnosis of communication failures.

More info here.

Climbing Trillions Mountain: a field guide to the Internet of Things

Widespread machine-to-machine (M2M) communication is bringing about the Internet of Things — or ‘the trillion-node network’, as the authors of this book put it. Trillions: Thriving in the Emerging Information Ecology, which is written by the three principals of MAYA Design (a Pittsburgh-based design consultancy and technology research lab), addresses the problem of how to cope with an internet comprising trillions of nodes, the majority of which do not have a person directly controlling them. Peter Lucas, Joe Ballay and Mickey McManus warn of the chaotic complexity that’s in danger of developing, and offer suggestions as to how to design a digital future in which “The data are no longer in the computers. We have come to see that the computers are in the data”.

The book is built around a mountaineering analogy, with ‘PC Peak’ — encapsulating the personal computing era and the human-centric internet/web — having been scaled. But looming above is the far larger ‘Trillions Mountain’, where, the authors contend, “the design techniques that have served us well on PC Peak will be wholly inadequate for the problems of scale we will soon face”.

The early chapters summarise the route to the post-PC era, including a cautionary tale about a once-great company (DEC) that failed to adapt to an imminent (PC) revolution and paid the ultimate price within a decade of its peak revenue year. The inference here is clear: there will be some notable fallers in the foothills of Trillions Mountain. The next-generation computing landscape, comprising trillions of nodes, is discussed, with the authors stressing the importance of ‘fungible’ devices and ‘liquid’ information — terms borrowed from economics. Fungibility — the free interchange of equivalent goods — is not a widespread feature of today’s IT landscape, with its numerous walled gardens, they say. Liquidity — the free flow of value — is variable: low-level packet switching flows efficiently enough, but higher levels of the information infrastructure are stickier. The third key requirement of the trillion-node computing landscape, say the authors, is a ‘true cyberspace’ comprising persistent digital objects, in contrast to today’s hypertext-based web.

In fact, according to Lucas, Ballay and McManus, quite a few components of today’s IT landscape are poorly architected for the trillion-node future. This includes computers that are platforms for data-siloing applications rather than pure information, the web browser — even the web itself and cloud computing. What we’re heading for, they say, is Complexity Cliff (there’s that mountaineering analogy again) — cascading unforeseen failures in ill-designed complex systems that, for example, “could easily ‘brick’ all the lights in a next-generation skyscraper that uses wireless systems to control illumination. Or the elevators. Or the ventilation”.

Around this point in the book, the authors expound their vision of cloud computing, which turns out to be a pervasive information store built on peer-to-peer networking — they call it the GRIS (the Grand Repository In the Sky), and contrast it with today’s essentially client-server ‘corporate Hindenburg’ clouds that could one day, like the airship, explode along with your data. There are also some rather curmudgeonly digs at the software development community in this chapter, which may not meet with universal approval. For example, a perceived lack of organised professionalism in software engineering (compared to codes of practice for the likes of builders or electricians) is largely laid at the door of the open-source community: “the Internet era has now passed into the hands of a pop culture that is neither formally trained nor intellectually rigorous, and doesn’t particularly care whether its ‘solutions’ have a rigorous engineering basis — as long as they accomplish the task at hand”.

More info here.

Ultralow-power developments target next-gen wireless sensors

The ultrasmall sensors of the future will monitor our health parameters, vehicles, machines and processes, buildings and smart constructions, and the environment. They will operate autonomously for long periods on a small battery, and they will communicate wirelessly. A key factor for their success, therefore, is their low power consumption, which will define the range of applications and functionalities for which they can be used.

At the 38th European Solid-State Circuits Conference in September, Imec and Holst Centre (Eindhoven, Netherlands) presented four ultralow-power developments to drive next-generation sensors and sensor networks: a frequency-shift-keying receiver for body-area networks, a flexible successive-approximation-register A/D converter for wireless sensor nodes, fast start-up techniques for duty-cycled impulse radio receivers, and a design approach targeting subthreshold operation.

ULP receiver for body-area network applications
Imec and Holst Centre have developed a power-efficient receiver for ULP BAN (ultralow-power body-area network) applications. Whereas most transceivers exploit OOK (on-off keying) modulation, the new receiver uses FSK (frequency-shift keying) modulation and is hence less sensitive to interference. The complete receiver, fabricated in 40-nm CMOS technology, consumes 382.5 μW. The sensitivity, measured at a bit error rate of 10⁻³, is −81 dBm at a 12.5-kbit/sec bit rate. The bit rate is scalable up to 625 kbits/sec, enabling a trade-off between sensitivity and bit rate. Taking advantage of the short-range nature of BAN applications, a mixer-first architecture is used, yielding a good dynamic range.
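The stated sensitivity/bit-rate trade-off can be made concrete as energy per received bit. Assuming the 382.5 μW consumption figure holds across the scalable bit-rate range (the article does not say so explicitly), the arithmetic is simply power divided by bit rate:

```python
POWER_W = 382.5e-6  # receiver power consumption from the article, in watts

def energy_per_bit_nj(bit_rate_bps):
    # Energy per received bit = power / bit rate, converted to nanojoules.
    return POWER_W / bit_rate_bps * 1e9

e_low = energy_per_bit_nj(12.5e3)   # ~30.6 nJ/bit at the most sensitive setting
e_high = energy_per_bit_nj(625e3)   # ~0.61 nJ/bit at the maximum bit rate
```

So running at the top rate costs roughly 50x less energy per bit, at the price of the sensitivity margin quoted above.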

Flexible SAR ADC for ULP wireless sensor nodes
Wireless sensor nodes for electroencephalography, electrocardiography, and temperature and pressure monitoring require ULP ADCs for both the sensor-readout interface and the wireless-communication front end. Each of these applications, however, has its own requirements for accuracy and bandwidth. Imec and Holst Centre have realized a flexible, power-efficient SAR (successive-approximation-register) ADC that designers can use for a variety of applications. The device supports resolutions from 7 to 10 bits and sample rates from dc to 2M samples/sec; the flexibility is achieved by implementing a reconfigurable comparator and a reconfigurable DAC. The chip, in a 90-nm process, occupies 0.047 mm², and achieves power efficiencies of 2.8 to 6.6 fJ per conversion step at 2M samples/sec with a 0.7-V supply.
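The fJ-per-conversion-step number is the standard Walden figure of merit for ADCs, FoM = P / (2^N · fs). As a rough sanity check (my own arithmetic, assuming the effective number of bits equals the nominal resolution, which in practice overstates it slightly), the implied power draw can be computed like this:

```python
def implied_power_uw(fom_fj, bits, fs_hz):
    # Walden figure of merit: FoM = P / (2**bits * fs)  =>  P = FoM * 2**bits * fs.
    # fom_fj is in femtojoules per conversion step; the result is in microwatts.
    return fom_fj * 1e-15 * (2 ** bits) * fs_hz * 1e6

p_10bit = implied_power_uw(6.6, 10, 2e6)  # ~13.5 uW at 10 bits, 2M samples/sec
p_7bit = implied_power_uw(2.8, 7, 2e6)    # ~0.7 uW at 7 bits, 2M samples/sec
```

Which end of the 2.8-6.6 fJ range pairs with which resolution is not stated in the article; the pairing above is an assumption for illustration, but either way the implied budget sits comfortably in the microwatt regime the opening paragraph calls for.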

More info here.

2012 Internet of Things Awards

The #IoT Awards seek to highlight and celebrate the year’s best projects, companies and ideas helping to create an Internet of Things.

Vote on your favorite Internet of Things projects from this year!

The 9 categories for this year are:

Connected Product (Body)
Connected Product (Home)
Smart City Application
Environmental Application
Enterprise Application
DIY Project
Open Source Project
Networked Art
Design Fiction

Submissions are still welcome until the 19th of December.

More info here.
