Optimize FPGAs for Software Radio Applications

Recently updated, this handbook reviews the latest FPGA technology and how it can be put to use in software radio systems. FPGAs offer significant advantages for implementing software radio functions such as digital downconverters and upconverters. These advantages include design flexibility, higher precision processing, lower power, and lower cost. Pentek SDR products that utilize FPGA technology and their applications are also presented.

Get your free download of this handbook now.

Standard Manufacturing Process Mints Nanoporous Chip Materials

A research team from Belgium’s University of Leuven, the National University of Singapore, and Australia's CSIRO has adapted a production process normally reserved for semiconductors to create metal-organic frameworks (MOFs), a class of material that could lay the foundation for advanced microelectronics.

Metal-organic frameworks consist of a nanoporous grid of both organic molecules and metal ions. The material takes shape as the organic molecules push the metal ions apart, forming a regular pattern of small holes (or nanopores) that researchers have compared to a microscopic sponge. With all these pores, the material’s surface area ranges from 1,000 to 5,000 square meters per gram.

"The net result is a structure where almost every atom is exposed to empty space: one gram of MOF crystals has a surface area . . . the size of a football field,” says CSIRO researcher Mark Styles. "Crucially, we can use this vast space to trap other molecules, which can change the properties of a material.”

More Microelectronics Research

New Research Injects Optimism into Quantum Computers

All-Optical Switch Could Push Electronics Out of Future Processors

New Research Pushes Skyrmions Closer to Magnetic Memory Devices

As reported in the journal Nature Materials, the researchers have uncovered a more efficient method for producing thin films of this material, which in the future could be grown directly on tiny electronic circuits. In laboratory experiments, the researchers demonstrated that the material could be made through chemical vapor deposition (CVD), a widely used manufacturing process for thin film semiconductors.

This revelation could represent a major step toward using the technology in microelectronics. “Vapor-phase deposition is already a common method to produce high-tech devices,” says Ivo Stassen, lead researcher from the KU Leuven Center for Surface Chemistry and Catalysis. As a result, new technologies using MOFs can be developed more quickly. Among these are sensors, nanochip components, and high-density batteries.

The massive surface area of the material could also challenge assumptions about the physical limits of semiconductor scaling. Manufacturers might be able to squeeze an unprecedented number of transistors into the material's huge surface area without taking up much space. MOFs have also shown potential as low-k dielectric materials, which promise to reduce parasitic capacitance, boost switching speeds, and lower heat dissipation in tiny electronic devices.

Professor Rob Ameloot, the other lead researcher from the Center for Surface Chemistry and Catalysis, notes that the lack of a mainstream production process has largely confined MOFs to the laboratory. Until now, researchers have only been able to grow the material using a liquid solvent. Ameloot points out, however, that MOF crystals produced through this process are typically too large and impure for integrated electronics. In addition, using liquid solvents is not ideal for growing MOFs directly on electronic components.

In contrast, the Leuven research team was able to adapt a mass production process to the unique chemistry of MOF thin films. “We first deposit layers of zinc and let them react with the vapor of the organic material,” explains Stassen. “The organic material permeates the zinc, the volume of the whole expands, and it is fully converted into a material with a regular structure and nanopores.”

Stassen said that to refine the procedure, the researchers are collaborating with the Leuven-based semiconductor research center imec, which specializes in nanoelectronics. He notes that, along with imec, the university has submitted patents on the new process.

Imec has been investigating new ways to pack smaller and smaller transistors into computer processors and other electronics. The research center recently unveiled advances that are laying the groundwork for silicon CMOS devices beyond the 5-nm node. However, the center also noted that it has begun investigating approaches beyond silicon, such as spintronics and 2D materials, which could produce even smaller nodes.

Ultimately, the potential of metal-organic frameworks lies in their versatility. In December 2013, researchers from Sandia National Laboratories in New Mexico made one of the first major breakthroughs with the material, proving that they could create MOF thin films that conduct electricity. In a New York Times article, the researchers expressed hope that they would soon be able to customize the material's structure, coding electrical behaviors that are difficult to achieve with normal semiconductors.

Weightless-P Sacrifices Range and Power, Adds Speed and Flexibility

The Weightless SIG, one of a growing number of standards bodies in the market for Internet of Things (IoT) networks, is trying to balance power consumption and transmission strength with its latest wireless standard. The group recently published the Weightless-P standard, which it says provides the reliability and security of a "carrier-grade" network while consuming little power.

The new standard outlines a bidirectional network that operates in the sub-GHz spectrum, using FDMA/TDMA channel access over 12.5-kHz narrowband channels to help reduce power consumption. To reduce interference and maintain the highest possible capacity, Weightless-P controls transmit power on both the downlink and uplink. At the same time, data rates adapt between 200 bits/s and 100 kbits/s depending on the quality of the network connection. The standard has a range of roughly 2 km.
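The rate adaptation can be sketched as a simple link-quality lookup. This is an illustrative sketch only: the 200 bits/s floor and 100 kbits/s ceiling come from the article, while the SNR thresholds and intermediate rate steps are assumptions, not values from the Weightless-P specification.

```python
# Illustrative sketch of link-adaptive data-rate selection. Only the
# 200 bits/s floor and 100 kbits/s ceiling come from the article; the
# SNR thresholds and intermediate steps are assumptions.

RATE_STEPS_BPS = [200, 625, 2_500, 12_500, 50_000, 100_000]  # assumed ladder
SNR_THRESHOLDS_DB = [-2, 2, 6, 10, 14]  # assumed; one per step up the ladder

def select_data_rate(snr_db: float) -> int:
    """Pick the highest data rate whose (assumed) SNR threshold is met."""
    rate = RATE_STEPS_BPS[0]  # worst link: fall back to 200 bits/s
    for threshold, candidate in zip(SNR_THRESHOLDS_DB, RATE_STEPS_BPS[1:]):
        if snr_db >= threshold:
            rate = candidate
    return rate

print(select_data_rate(-5.0))  # poor link: 200
print(select_data_rate(15.0))  # strong link: 100000
```

On a degraded link the radio steps down toward the 200-bits/s floor rather than dropping the connection, which is how the standard trades speed for reliability.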

Related

First SDK Released for Weightless-N Low-Power IoT Standard

Low-Power Wide-Area Networks Gain IoT Footholds

White Space Radio Gives New Life To Old Spectrum

Weightless-P has been under development since last August. Weightless SIG revealed that it had partnered with M2Communications (M2COMM), a Taiwan-based networking company, to lead the project. Development kits and hardware for Weightless-P, including base stations and endpoints, will be available in early 2016, according to a statement on the Weightless SIG website.

The Weightless-P standard belongs to a class of low-power wide-area networks (LPWAN) that are being designed for sophisticated industrial systems. In the future, these systems are expected to use thousands of wireless sensors to gather valuable data about manufacturing and infrastructure. Many industrial companies, including General Electric and Siemens, have spent the last few years working to connect machines with powerful servers that can analyze this data.

Weightless-P is the third low-power standard the Weightless SIG has developed. The first, Weightless-W, is designed to operate in the television “white space” spectrum. The more recent standard, Weightless-N, places an emphasis on an extremely wide area of coverage instead of high data rates. Though limited to one-way communications, Weightless-N offers longer range and lower power consumption than Weightless-P, which trades these benefits to a certain extent for higher performance.

In a presentation at the 2014 ARM Tech Symposia, Fabien Petitgrand, a technical staff member at M2COMM, said that low-power, low-cost, and highly reliable connections are the main requirements for industrial IoT systems. He stressed that cellular signals and mesh networks—such as those introduced by Bluetooth and Thread in recent years—consume too much power, and cannot scale as effectively as their LPWAN counterparts.

The low-frequency signals used by the Weightless-P standard help meet these requirements for power and cost. With lower frequencies, network operators can design smaller, lower-power, and lower-cost antennas. Weightless-P operates with transmit power up to 17 dBm to allow operation from coin-cell batteries. When idling, power consumption is below 100 µW.
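Those power figures invite a quick back-of-envelope lifetime estimate. The sub-100-µW idle number comes from the article; the coin-cell capacity is an assumed typical CR2032 rating, so the result is only indicative.

```python
# Back-of-envelope lifetime from the idle figure quoted in the article
# (<100 uW). The cell capacity is an assumed typical CR2032 rating
# (~225 mAh at 3 V), not a number from the Weightless SIG.

CELL_CAPACITY_MAH = 225.0   # assumed CR2032 capacity
CELL_VOLTAGE_V = 3.0        # nominal CR2032 voltage
IDLE_POWER_W = 100e-6       # upper bound quoted for Weightless-P idle

energy_j = CELL_CAPACITY_MAH / 1000 * 3600 * CELL_VOLTAGE_V  # mAh -> joules
idle_seconds = energy_j / IDLE_POWER_W

print(f"Idle-only lifetime: {idle_seconds / 86_400:.0f} days")
```

Real lifetime would be shorter, since transmit bursts at up to 17 dBm dominate the energy budget in active use.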

In recent years, developments in the industrial IoT have driven a flood of new LPWAN standards. Other technologies include SigFox, the DASH7 Alliance Protocol, LoRaWAN, nWave, IEEE 802.11ah, and LTE Cat-M, among others still under development.

What’s the Difference Between Gaming and PC Motherboards?

I have been building PCs from the ground up for decades, and the motherboard has always been a critical part—especially when it comes to gaming PCs. Gamers demand more performance and often push the limits, whereas a regular PC user couldn't care less. Higher frame rates for first-person shooters can be the difference between a top-notch experience and a bland, jerky, annoying gaming session.

Way back in the old days, gaming platforms used motherboards that were pretty similar to conventional ones, just paired with higher-end graphics cards. Things have changed significantly, such that gaming motherboards are now much different from conventional PC motherboards (although the details can be subtle).

Related

Where Has My PC Gone? It’s Gone Gaming

Reworking the PC for the IoT

Where Have All the Gamers Gone?

The easiest way to see the difference is to take a look at a couple of the latest gaming PC motherboards. The first is Super Micro Computer's (Supermicro) C7Z170-SQ motherboard (Fig. 1), designed for Intel's latest sixth-generation, Skylake-based LGA 1151 chips.

Those subtle differences start with a high-quality PCB of woven E-glass coated with epoxy resin. Coupled with heavier copper traces, this allows a system to deliver improved signal integrity, especially when overclocking. Overclocking drives the processor at higher clock rates than normal, which raises operating temperatures and can reduce processor life if not addressed by other means, such as better heat sinks or water-cooled solutions.

Not all processor chips can be overclocked. The processor must be “unlocked,” like the Intel Core i7-6700K, and the motherboard must support the non-standard clock rates that the user can select.

The capacitors for power supplies are also a major item on gaming motherboards. The C7Z170-SQ uses X5R or X7R class ceramic chip capacitors exclusively, and there are several hundred per motherboard. A bad capacitor can lead to intermittent operation or a completely dead motherboard.

Sockets on gaming motherboards also tend to be of higher quality than those on regular motherboards. The C7Z170-SQ uses thicker 15-micron gold plating, compared to the 2-micron plating found on typical connectors.

The choice of supporting chipset matters for a gaming motherboard, since it determines the possible peripheral complement. The C7Z170-SQ uses the Intel Z170 Express chipset (Fig. 2), which adds Gigabit Ethernet, up to 20 additional x1 PCI Express lanes, six SATA ports, 10 USB 3.0 ports, 14 USB 2.0 ports, and high-definition 7.1 audio. The C7Z170-SQ couples the HD audio with a Realtek ALC1150 multichannel DAC (digital-to-analog converter). It exposes only some of the peripheral ports, including all six 6-Gbit/s SATA ports, an Ethernet port, six USB 3.0 ports, and six USB 2.0 ports. There is also a 10-Gbit/s USB 3.1 port provided by an additional chip and linked to a USB Type-C connector on the rear panel.

The diagram shows how the processor supports a x16, a pair of x8, or an x8 plus two x4 lane configurations. Typically, a gaming PC provides multiple x16 slots for video cards via PCI Express switch chips linked to the x16 interface. The C7Z170-SQ has three PCI-E 3.0 x16 sockets, but only one has a full x16 connection; of the others, one is a x4 and the other a x8, although the x16 slot then runs as a x8 connection. There are also three x4 sockets, although one supports only x1 connections.
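The CPU-side lane splits named above can be captured as a simple validity check. The set of allowed splits (x16, x8/x8, x8/x4/x4) comes from the article; everything else here is illustrative.

```python
# Validity check for the CPU's PCIe lane splits named in the article:
# x16, x8/x8, or x8/x4/x4 (16 lanes total). Purely illustrative.

VALID_BIFURCATIONS = {(16,), (8, 8), (8, 4, 4)}

def valid_cpu_lane_config(widths) -> bool:
    """True if the slot widths match a supported CPU bifurcation."""
    return tuple(sorted(widths, reverse=True)) in VALID_BIFURCATIONS

print(valid_cpu_lane_config((16,)))         # True
print(valid_cpu_lane_config((8, 4, 4)))     # True
print(valid_cpu_lane_config((4, 4, 4, 4)))  # False: not a supported split
```

This is why populating the third x16 socket forces the first slot down to x8: the 16 CPU lanes can only be divided along these fixed boundaries.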

Some gaming motherboards support AMD's CrossFire and Nvidia's SLI (Scalable Link Interface) for combining multiple GPU boards into a single system. This allows the collection of GPUs to drive a single set of displays, usually at a higher frame rate and resolution. These motherboards normally have a heftier PCI Express switch providing x16 links to three PCI Express x16 sockets. The challenge is to utilize all that bandwidth; otherwise, the extra potential throughput is wasted.

One change that is showing up in newer motherboards is the M.2 socket. Things get interesting with the M.2 because it supports SATA, x1 PCI Express or x4 PCI Express connections. It requires matching support from the motherboard and the M.2 board. For example, Samsung’s 256 Gbyte SM951 M.2 module (Fig. 3) uses a x4 PCI Express-based NVMe interface. It would not work in a socket that only supports SATA, although it would work in a socket that only had a x1 PCI Express interface, since PCI Express can adapt to the number of available lanes.
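The M.2 compatibility rules in this paragraph reduce to a small decision function. This is a toy sketch of that logic, not a real model of M.2 socket keying.

```python
# Toy decision function for the article's M.2 point: a PCIe module can
# negotiate down to fewer lanes, but a SATA-only socket cannot host a
# PCIe (e.g., NVMe) module. Not a real model of M.2 socket keying.

def m2_compatible(socket_bus: str, module_bus: str) -> bool:
    if module_bus == "sata":
        return socket_bus in ("sata", "both")
    # PCIe module: the socket must expose PCIe; lane count is negotiated
    return socket_bus in ("pcie", "both")

# An x4 PCIe NVMe module (like the SM951) in a SATA-only socket: no
print(m2_compatible("sata", "pcie"))  # False
# The same module in an x1 PCIe socket: works, at reduced bandwidth
print(m2_compatible("pcie", "pcie"))  # True
```

The asymmetry is the key point: PCI Express lane counts are negotiable, but the underlying bus protocol is not.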

The back panel of a gaming motherboard is pretty similar to that of a conventional motherboard. The Supermicro C7Z170-SQ back panel (Fig. 4) retains a PS/2 socket for a legacy keyboard or mouse. There are HDMI, DVI, and DisplayPort sockets driven by the built-in video support. Most general users would utilize one of these, but most gamers would install a video card with its own output. Still, the built-in interfaces can be useful in a multiple-screen configuration. Most of the connectors dwarf the tiny USB Type-C connection to the left of the six audio sockets on the right side.

Cooling a Gaming PC

Now we take a step back and look at cooling. This is typically an add-on part of the gaming solution, like an additional GPU board. At minimum, a PC processor needs a heat sink, and high-end processors like the Core i7 used by gamers run especially hot. A large heatsink and fan are the minimum, but liquid cooling is often used, since it is more efficient and can handle the additional heat from overclocking.

One example is Corsair’s Hydro Series H110i GT (Fig. 5). This has a 280-mm radiator with a pair of SP140L PWM fans. It is connected by tubes to the heat exchange unit that sits atop the processor. This has a lower profile than most forced air solutions, although the overall liquid cooling solution is larger.

The H110i GT is Corsair Link-compatible. This allows the system temperature monitor to adjust the color of the LED lighting found on many gaming PC cases.

The chassis is also a major consideration with a gaming PC to handle both the motherboard and cooling system. For example, the Supermicro S5 chassis (Fig. 6) can handle 240-mm cooling systems like the Corsair Hydro Series H105 or 280-mm units like the H110i. The chassis can handle up to nine large fans at once.

Using liquid cooling for the processor is just the starting point. Other chips can get rather warm and require cooling, such as the GPU(s), memory, and support chips on the motherboard. Gigabyte has a number of motherboards that address the latter. For example, the Gigabyte GA-Z170X-SOC FORCE (Fig. 7) has G1/4-thread fittings on the support heat sinks that allow them to be tied into a system-wide liquid-cooling loop that can also include the processor, GPU(s), and memory.

The radiator is shared by all the devices within the cooling system. Tubes connect the various components, with liquid flowing through the entire system. GPU video boards require matching support if they are to be included in the cooling system.

The UEFI BIOS

One item that will be found on all new motherboards is a UEFI BIOS. A UEFI BIOS supports larger storage devices and provides incremental functional improvements for new devices. It also supports features like secure boot.

Most non-gaming PC motherboards can be used for playing high-end games, and the addition of a GPU board will help, but a gaming motherboard will be worth the cost if you are looking for the best gaming experience.

Dual-Junction Solar Cell Breaks Efficiency Record

A research team based at the U.S. National Renewable Energy Laboratory (NREL) has developed a multi-junction solar cell that it says has broken an efficiency record. The research team partnered with the Swiss Center for Electronics and Microtechnology (CSEM) to create the so-called tandem solar cell, combining two layers of semiconductor material to absorb more of the solar spectrum.

In laboratory tests, the research team demonstrated that the solar cell could convert direct sunlight into electricity at 29.8% efficiency. David Young, a senior researcher at the NREL, points out that these results edged past the theoretical limit of 29.4% calculated for mechanically stacked cells. In addition, the device works without having to concentrate sunlight with reflectors, which can increase efficiency in certain solar cells.

Related

Energy Department Backs Conversion Unit for Concentrating Solar Power Plants

Gallery: 9 Solar-Powered Vehicles from the 2015 World Solar Challenge

What’s The Difference Between Thin-Film And Crystalline-Silicon Solar Panels?

Each research center contributed part of the dual-junction solar cell, which combines III-V and crystalline silicon semiconductors. CSEM scientists developed a silicon sub-cell, on top of which NREL stacked a layer of gallium-indium phosphide (GaInP). The resulting device has a higher efficiency than either material by itself. The record efficiency for an individual crystalline silicon cell is 25.6%, and 20.8% for single-junction GaInP.

“We believe that silicon heterojunction technology [that combines different crystalline semiconductors] is today the most efficient silicon technology for application in tandem solar cells,” says Christophe Ballif, head of photovoltaic research at CSEM.

The researchers gave few additional details in a recent news release, but Young has submitted the team’s research paper to the IEEE Journal of Photovoltaics for publication. The experimental results were published in the journal Progress in Photovoltaics in an article reviewing solar cell designs and the highest efficiency in each category.


The review, which includes solar cells up to three times more efficient than conventional solar panels, underlines the fact that efficiency is not everything for commercial solar cells. The highest efficiency ever recorded was 46% for a multi-junction device under highly concentrated sunlight. Soitec, a French company that makes photovoltaic semiconductors, engineered the solar cell in 2014. But the company has since stopped producing this technology.

Despite these higher efficiencies, multi-junction solar cells have been kept out of the commercial market by their complex structure and high manufacturing costs. These devices have typically been limited to satellites and other spacecraft. On the ground, they have to compete with the gradually falling cost of crystalline silicon, the most prevalent material for conventional solar cells. Mass-produced silicon cells are typically less than 20% efficient, but they are relatively cheap compared to the exotic material normally used in multi-junction cells.

The NREL, which is the primary research laboratory for the U.S. Energy Department, is examining several different semiconductor materials for solar cells. Last November, the laboratory found a way to significantly reduce the amount of energy lost to heat in perovskite-based solar cells. The discovery could one day lead to solar cells that convert up to two-thirds of sunlight to electricity.

The dual-junction research was funded in part by the Energy Department’s SunShot Initiative, a program aimed at making solar energy cost-competitive with fossil fuels. Additional funds were provided by the Swiss Confederation and Nano-Tera.ch, a Swiss green technology fund. 

Meet Electronic Design's New Power/Analog Editor

Hello! My name is Maria Guerra and I am the new Technology Editor on Electronic Design covering Analog/Power. I hold a bachelor’s degree in Electrical Engineering from Universidad Metropolitana in Caracas, Venezuela. Upon graduation, life circumstances brought me to the United States, where I earned a master’s degree in Electrical Engineering with a certificate in Wireless Communications at NYU Tandon School of Engineering.

Check out Maria’s Articles

What’s the Difference Between Passive and Active Power-Factor Correctors?

USB Type-C Is Revolutionizing the Market

Wireless-Charging Technologies: Transforming the Mobile World

Over the course of my career, I have been involved in the oil and gas industry. In my hometown, I worked at Pequiven S.A., where I was responsible for estimating electric-circuit variables pertaining to the bottom of oil wells. Variables pertaining to the surface of oil wells were normalized and fed into a neural network for the estimation. In the UK, I worked for Kellogg, Brown, and Root Ltd. (KBR). While working there, one of the responsibilities that I enjoyed the most was giving technical support to the Electrical Engineering Group.

At KBR, I performed power systems studies (e.g., load-flow calculations, short-circuit analysis, motor-starting studies, harmonic studies, etc.) for different projects for both offshore and onshore designs. I communicated the results of those studies by writing technical reports, which I always found quite rewarding and challenging. Now I find myself in a similar situation: researching and reporting.

In my new role, I am going to have the chance to report on the latest information related to emerging technologies in the analog and power electronics world. I am also looking forward to sharing with our readers various learning resources that will help to refresh and reinforce engineering concepts.

I am particularly looking forward to talking about power-semiconductor technology trends. I would like our readers to be aware of what the industry leaders in the power electronics world have to offer in the areas of power management, charging, energy harvesting, power generation, and more. Among the hot topics that I plan to cover are electric/hybrid cars, renewable energy sources, and wireless charging technologies.

I’m based in Electronic Design’s New York City office and can be reached at maria.guerra@penton.com.

Low-Latency Interconnects Plumb Depths of Particle Physics

The European Organization for Nuclear Research (CERN) reported last month that two separate research teams may have discovered a new fundamental particle of matter. Amid both initial enthusiasm and skepticism among CERN physicists, it could take months for computers to comb through the ocean of data produced by the Large Hadron Collider and confirm that the particle actually exists.

As the physicists wait for more information to emerge, CERN’s information technology department is engaged in a different kind of research. Under the organization’s Openlab program, CERN has partnered with educational institutions and technology companies to ensure that, in the future, its computers are fast and efficient enough to sift through increasingly complex particle collisions.

Related

Serial I/O Interfaces Dominate Data Communications

Storage And Computation Capacity Continues To Grow

RapidIO Trade Association Spec Pushes 10 Gbit/s

The latest project involves using new high-speed interconnects to link the thousands of processors in the CERN Data Center. Updating the interconnects will help optimize the center's existing processing power, an alternative to simply adding more processors. The processors are designed to run algorithms that filter out uninteresting particle “collision events” in the Large Hadron Collider. The goal is to find unexpected interactions or anomalies that could suggest the presence of new particles.

CERN’s demand for faster processors stems mainly from the enormous amount of data produced by the Large Hadron Collider. The collider smashes particles together at near the speed of light and monitors the particle shards—for instance, those of the Higgs boson particle that was discovered in 2012—spraying in different directions. The shards leave behind traces in space that are recorded by “detectors” that take roughly 14 million snapshots per second of the particle interactions. These frames of data translate into about 3 gigabytes of data per second (about 25 petabytes or 25,000 terabytes per year).
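It is worth sanity-checking those figures. At a sustained 3 GB/s, roughly 25 PB per year implies the detectors record for only part of the year, consistent with the fact that the collider does not run continuously:

```python
# Back-of-envelope check of the article's figures: 3 GB/s of filtered
# data vs. roughly 25 PB (25,000 TB = 25 million GB) per year.

RATE_GB_PER_S = 3
YEARLY_GB = 25e6
SECONDS_PER_YEAR = 365 * 86_400

recording_seconds = YEARLY_GB / RATE_GB_PER_S
duty_cycle = recording_seconds / SECONDS_PER_YEAR

print(f"Implied recording time: {recording_seconds / 86_400:.0f} days/year")
print(f"Implied duty cycle: {duty_cycle:.0%}")
```

The roughly 96 days of implied recording time lines up with the LHC's schedule of technical stops and maintenance shutdowns between physics runs.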

At the core of the new interconnects is the high-speed communications standard RapidIO, which has been widely used to connect processors in cellular base stations and network servers. RapidIO can support interconnects of up to 20 Gbits/s directly on processors, without network interface controllers, adapters, or software intervention.

According to Integrated Device Technology Inc. (IDT), which designed the interconnects for CERN, the latency of RapidIO can be as low as 100 ns between switches and less than one microsecond between processing nodes. The standard can be used to establish a connection between chips, boards, and chassis.

Although latency was the central concern, the new interconnects also help reduce the data center's massive power demands. Communication between processors on the same chip takes little energy (on the order of microwatts). But according to Barry Wood, principal product applications engineer at IDT, the energy cost of communication between chips is significantly higher, ranging from hundreds of milliwatts to watts. In supercomputers and servers, that can add up to hundreds of megawatts (MW).

The interconnect project underlines the creativity that has defined CERN's information technology program over the years. In 1989, for instance, the organization created the World Wide Web as a way to distribute its research data. Building on its World Wide Web technology, CERN developed grid computing in 2002 to process particle-collision data using computer systems from around the world. Today, the Worldwide LHC Computing Grid sends information to 170 data centers in 42 countries.

CERN physicists will keep raising the energy at which the Large Hadron Collider fires protons in search of new particles that could reveal deeper physical laws. During its first two years of running, the collider fired protons to energies of about four trillion electron volts. But since restarting last June, after a two-year shutdown, CERN physicists have been firing protons with 6.5 trillion electron volts of energy. In order to find more esoteric particles, the collider will have to operate at higher energies, creating even more violent particle collisions. In turn, these collisions produce more data.

Before the shutdown, the data center was storing data at 1 gigabyte per second, with occasional peaks of 6 gigabytes per second, according to Alberto Pace, head of Data and Storage Services in CERN's IT department. But now, “what was once our ‘peak’ will now be considered average, and we believe we could go up to 10 gigabytes-per-second,” he says.

The initial phase of the interconnect project will focus mainly on connecting a small number of processor nodes. But during the three-year research collaboration, IDT and CERN engineers plan to build large-scale computer systems and start using them to analyze data.

Why Integrated Cloud PLM Guarantees Product Lifecycle Visibility

Shipping delays, quality failures, and product slips can occur when your engineering and operations teams aren't in sync. This whitepaper examines how an integrated cloud PLM system gives your product development teams complete visibility across the product lifecycle.

Evaluating Electrically Insulating Epoxies

Dielectric constant, dissipation factor, dielectric strength, and surface and volume resistivity are all fundamental electrical properties of epoxies. How they are measured, what values are desirable, and how they react to changes in temperature, fillers, and other variables are considered in this paper. The specific composition of resins and curing agents will also affect the electrical properties of a cured epoxy system. Three major types of curing agents are explained, along with their benefits and trade-offs with respect to electrical properties.

Wireless Sensor Networks Monitor Active Volcanoes in Japan

A new wireless sensor network being installed in Japan could help scientists more accurately predict the behavior of the country’s most active volcanoes. The system will gather enormous amounts of data used to forecast volcanic activity, identifying when it might be necessary to issue warnings or evacuations.

The sensor network, which will be installed around 47 volcanoes that the Japanese government has selected for around-the-clock observation, will measure several different variables. In addition to the seismic activity that almost always occurs before an eruption, the sensors will monitor gas emissions, topography changes, and vibrations in the air caused by rocks and ash spewing from the volcano.

More on Low-Power Wide-Area Networks

Wireless Sensor Networking for the Industrial IoT

Low-Power Wide-Area Networks Gain IoT Footholds

Weightless-P Supports Higher Performance Low-Power Networks

Japan sits on the western edge of the so-called “ring of fire,” an area around the Pacific Ocean where most of the earth’s volcanic eruptions and earthquakes occur. Sakurajima in southern Japan—which sits only miles from the roughly 600,000 people living in Kagoshima—has been erupting almost continuously for more than 50 years. Other volcanoes in the country are susceptible to “phreatic eruptions,” explosions of super-heated ground water that are notoriously difficult to detect.

In recent years, Japan has placed growing urgency on volcano monitoring. In late 2014, the eruption of Mount Ontakesan in central Japan killed 63 people and prompted the Japan Meteorological Agency, the state weather organization, to review its volcano forecasting methods. Ultimately, it proposed adding new technology to the agency's early warning system. It remains unclear whether the agency or a private company commissioned the new sensor network.

Because large populations often live in the shadow of Japan's volcanoes, the sensor network was designed to provide a constant stream of real-time data. The system was built around LoRa technology, a wireless standard for sending data over long distances while consuming little power from end-node batteries. The sensors are based on a LoRa transceiver from Semtech Corp., which the company says can provide each sensor with at least five years of battery life. In addition, LoRa can adapt data rates to ensure that data keeps flowing from the sensors in spite of radio interference.

The information gathered by the sensors will be transmitted via LoRa gateways to manned monitoring stations located 5 to 10 km from the volcanoes. LoRa, and its network protocol LoRaWAN, uses a chirp spread-spectrum radio scheme, sending data through a series of gateways that serve as a bridge between the sensors and network servers.

LoRa is one of a growing number of low-power wide-area networks that are being designed to mine valuable data from advanced industrial systems and infrastructure. The standard can support more than one million uplink devices and up to around one hundred thousand downlink devices per access point. In rural areas the standard can stretch up to 15 km, while in urban areas, it can range from 2 to 5 km.
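The range-versus-rate trade-off follows from LoRa's chirp spread-spectrum modulation. A standard approximation for the raw PHY bit rate is Rb = SF × (BW / 2^SF) × CR, where SF is the spreading factor; this formula comes from Semtech's LoRa modem documentation, not the article, and the bandwidth and coding-rate defaults below are common LoRaWAN settings, not figures from the sensor network.

```python
# Raw LoRa PHY bit rate: Rb = SF * (BW / 2**SF) * CR, where SF is the
# spreading factor (7-12), BW the bandwidth in Hz, and CR the coding rate.
# Defaults (125-kHz bandwidth, 4/5 coding) are common LoRaWAN settings,
# not values stated in the article.

def lora_bit_rate(sf: int, bw_hz: float = 125_000.0, cr: float = 4 / 5) -> float:
    return sf * (bw_hz / 2**sf) * cr

for sf in (7, 12):
    print(f"SF{sf}: {lora_bit_rate(sf):.0f} bits/s")
```

Raising the spreading factor from 7 to 12 cuts the raw rate from roughly 5.5 kbits/s to about 300 bits/s while extending range, which is exactly the trade-off a long-distance volcano-monitoring network exploits.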

LoRa's architecture allows for three kinds of end-node devices, depending on their uplink and downlink behavior. Class A devices are bidirectional, with two short downlink windows after each scheduled uplink. Class B devices (also bidirectional) add extra scheduled downlink windows, while Class C devices keep their receive links open almost continuously. Payload size, data rate, and range can be traded off against one another.
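The three device classes can be summarized in a small reference structure; the receive-window descriptions are paraphrased from the paragraph above.

```python
# Quick reference structure for the three LoRaWAN device classes,
# with receive behavior paraphrased from the text above.

from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceClass:
    name: str
    bidirectional: bool
    receive_behavior: str

LORA_CLASSES = (
    DeviceClass("A", True, "two downlink windows after each scheduled uplink"),
    DeviceClass("B", True, "Class A windows plus additional scheduled windows"),
    DeviceClass("C", True, "receive link open almost continuously"),
)

for c in LORA_CLASSES:
    print(f"Class {c.name}: {c.receive_behavior}")
```

Battery-powered field sensors typically run as Class A, since keeping the receiver open (Class C) costs far more energy than the occasional uplink.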