RESNA: Why You as a Technologist Should Know About It

RESNA - Rehabilitation Engineering and Assistive Technology Society of North America

RESNA is the premier professional organization dedicated to promoting the health and well-being of people with disabilities through increasing access to technology solutions. RESNA advances the field by offering certification, continuing education, and professional development; developing assistive technology standards; promoting research and public policy; and sponsoring forums for the exchange of information and ideas to meet the needs of its multidisciplinary constituency.

The organization's legal name is RESNA.

RESNA's goal: "To maximize the health and well-being of people with disabilities through technology."

The purpose of RESNA: To contribute to the public welfare through scientific, literary, professional, and educational activities by supporting the development, dissemination, and utilization of knowledge and practice of rehabilitation and assistive technology in order to achieve the highest quality of life for all citizens.

About the Organization:

RESNA is a 501(c)(3) not-for-profit membership association. RESNA's volunteer Board of Directors provides governance and oversight of the organization's activities. The Board establishes several committees for this purpose. RESNA also has a small professional staff, consisting of an Executive Director and core programmatic staff. The organization is headquartered in Arlington, VA.

RESNA Offers:

RESNA offers several programs for members and for the assistive technology professional community at large. The programs are:

  • Certification
  • Continuing Education
  • Assistive Technology Journal
  • Annual Conference
  • Assistive Technology Standards Development
  • Student Design Competition
  • Student Scientific Paper Competition

International Efforts

RESNA promotes the fields of rehabilitation engineering and assistive technology, advocates for access to technology solutions for people with disabilities, and facilitates the exchange of ideas, research, and technologies. For more information about its international alliances, or to become involved, please contact the RESNA office.

RESNA has consultative status at the United Nations (UN) as the world’s leading authority in the fields of rehabilitation engineering and assistive technology. This designation allows RESNA to participate in UN meetings and provide expert testimony on issues related to people with disabilities, assistive technology, and rehabilitation engineering. Most recently, RESNA participated in the General Assembly of the United Nations on September 23, 2013 in New York City to discuss how disability and assistive technology should be mainstreamed in the post-2015 UN Development Agenda.


Alliances All Around

The Alliance of Assistive Technology Professional Organizations is a formal agreement between membership-based professional societies and associations that are working to advance the field of assistive technology and rehabilitation engineering to benefit people with disabilities and functional limitations of all ages. The intent of this Alliance is to promote communication and information exchange, support each other’s efforts, and speak with a more unified voice on international issues. Participants in the Alliance will work to identify areas of common concern, develop joint strategies, and promote dialogue and collaboration that benefit its memberships. The agreement reflects a joint commitment to improving access to assistive technology through research, policy advocacy, training, information sharing, and knowledge translation. In addition to RESNA, signatories include the national or regional organizations representing the field in Europe, Australia, Japan, South Korea, and Taiwan.

The member organizations are:

  • Association for the Advancement of Assistive Technology in Europe
  • Australian Rehabilitation and Assistive Technology Association
  • Rehabilitation Engineering Society of Japan
  • Rehabilitation Engineering Society of Korea
  • Taiwan Rehabilitation Engineering and Assistive Technology Society (TREATS)

Assistive Technology Journal

Assistive Technology, the official journal of RESNA, is an applied, scientific publication in the multi-disciplinary field of technology for people with disabilities. The journal's purpose is to foster communication among individuals working in all aspects of the assistive technology area including researchers, developers, clinicians, educators and consumers.  

RESNA members receive an online subscription to the journal as part of their membership.  Members may also receive a printed copy of the AT Journal for an additional $30 annual fee. The journal is published by Taylor and Francis. 

The AT Journal considers papers from all assistive technology applications. Only original papers are accepted. Technical notes describing preliminary techniques, procedures, or findings of original scientific research may also be submitted. Letters to the Editor are welcome, and books for review may be sent by authors or publishers.


Assistive Technology (AT) and Why You Should Build One

Assistive technology (AT) is any item, piece of equipment, software program, or product system that is used to increase, maintain, or improve the functional capabilities of persons with disabilities. It is an umbrella term that covers assistive, adaptive, and rehabilitative devices for people with disabilities, as well as the process used in selecting, locating, and using them.

People who have disabilities often have difficulty performing activities of daily living (ADLs) independently, or even with assistance. ADLs are self-care activities that include toileting, mobility (ambulation), eating, bathing, dressing, and grooming. Assistive technology can ameliorate the effects of disabilities that limit the ability to perform ADLs. It promotes greater independence by enabling people to perform tasks they were formerly unable to accomplish, or had great difficulty accomplishing, by enhancing, or changing the methods of interacting with, the technology needed to accomplish such tasks. For example, wheelchairs provide independent mobility for those who cannot walk, and assistive eating devices can enable people who cannot feed themselves to do so. Thanks to assistive technology, people with disabilities have the opportunity for a more positive and independent lifestyle, with an increase in "social participation" and "security and control," and a greater chance to "reduce institutional costs without significantly increasing household expenses." Assistive technology can be implemented in various forms and in varied domains.

  • AT can be low-tech: communication boards made of cardboard or fuzzy felt.
  • AT can be high-tech: special-purpose computers.
  • AT can be hardware: prosthetics, mounting systems, and positioning devices.
  • AT can be computer hardware: special switches, keyboards, and pointing devices.
  • AT can be computer software: screen readers and communication programs.
  • AT can be inclusive or specialized learning materials and curriculum aids.
  • AT can be specialized curricular software.
  • AT can be much more—electronic devices, wheelchairs, walkers, braces, educational software, power lifts, pencil holders, eye-gaze and head trackers, and much more.

Assistive technology helps people who have difficulty speaking, typing, writing, remembering, pointing, seeing, hearing, learning, walking, and many other things. Different disabilities require different assistive technologies.

Types of Impairments

Mobility impairments

This technology is needed when walking is difficult or impossible due to illness, injury, or disability. It focuses on enhancing a person's motor and movement skills; it does not necessarily cure the impairment, but helps the person overcome its restrictions. Devices currently available for mobility include wheelchairs, transfer devices, walkers, prostheses, and others.

Visual impairments

Visual impairment, also known as vision impairment or vision loss, is a decreased ability to see to a degree that causes problems not fixable by usual means, such as glasses. Some also include those who have a decreased ability to see because they do not have access to glasses or contact lenses. Visual impairment is often defined as a best corrected visual acuity of worse than either 20/40 or 20/60. The term blindness is used for complete or nearly complete vision loss. Visual impairment may cause people difficulties with normal daily activities such as driving, reading, socializing, and walking. The World Health Organization (WHO) estimates that 80% of visual impairment is either preventable or curable with treatment.

Examples of assistive technology for visual impairment include screen readers, braille and braille embossers, refreshable braille displays, desktop video magnifiers, screen magnification software, large-print and tactile keyboards, navigation assistance, wearable technology, and others.

Accessibility software

In human–computer interaction, computer accessibility (also known as accessible computing) refers to the accessibility of a computer system to all people, regardless of disability or severity of impairment; examples include web accessibility guidelines. Another approach is for the user to present a token to the computer terminal, such as a smart card, that carries configuration information to adjust the computer's speed, text size, etc. to their particular needs. This is useful where users want to access public computer-based terminals in libraries, ATMs, information kiosks, etc. The concept is encompassed by the CEN EN 1332-4 standard, Identification Card Systems - Man-Machine Interface. Development of this standard has been supported in Europe by SNAPI and has been successfully incorporated into the Lasseo specifications, but with limited success due to the lack of interest from public computer terminal suppliers.

Hearing impairments

People who are deaf or hard of hearing have a more difficult time communicating and perceiving information than hearing individuals, and thus often rely on visual and tactile media for receiving and communicating information. Assistive technology and devices offer this community various solutions by providing amplified sound (for those who are hard of hearing), tactile feedback, visual cues, and improved technology access. Individuals who are deaf or hard of hearing use a variety of assistive technologies that give them improved access to information in numerous environments. Most devices either provide amplified sound or alternate ways to access information through vision and/or vibration. These technologies can be grouped into three general categories: hearing technology, alerting devices, and communication support.

The most common devices include hearing aids, assistive listening devices, amplified telephone equipment, and others.

Cognitive impairments

Assistive Technology for Cognition (ATC) is the use of technology (usually high tech) to augment and assist cognitive processes such as attention, memory, self-regulation, navigation, emotion recognition and management, planning, and sequencing activity. Systematic reviews of the field have found that the number of ATC devices is growing rapidly but that most focus on memory and planning, that there is emerging evidence of efficacy, and that considerable scope exists to develop new ATC.

Examples include NeuroPage, which prompts users about meetings; Wakamaru, which provides companionship, reminds users to take medicine, and calls for help if something is wrong; and telephone reassurance systems. Developments to combat cognitive impairment include memory aids and educational software.

Computer accessibility

One of the largest problems that affect people with disabilities is discomfort with prostheses. An experiment performed in Massachusetts utilized 20 people with various sensors attached to their arms. The subjects tried different arm exercises, and the sensors recorded their movements. All of the data helped engineers develop new engineering concepts for prosthetics.

Assistive technology may attempt to improve the ergonomics of the devices themselves, such as Dvorak and other alternative keyboard layouts, which offer more ergonomic arrangements of the keys. Assistive technology devices have been created to enable people with disabilities to use modern touch-screen mobile computers such as the iPad, iPhone, and iPod touch. The Pererro is a plug-and-play adapter for iOS devices which uses the built-in Apple VoiceOver feature in combination with a basic switch. This brings touch-screen technology to those who were previously unable to use it. With the release of iOS 7, Apple introduced the ability to navigate apps using switch control. Switch access can be activated through an external Bluetooth-connected switch, a single touch of the screen, or right and left head turns using the device's camera. Additional accessibility features include AssistiveTouch, which allows a user to access multi-touch gestures through pre-programmed onscreen buttons.

For users with physical disabilities, a large variety of switches are available and customizable to the user's needs, varying in size, shape, and the amount of pressure required for activation. A switch may be placed near any area of the body that has consistent and reliable mobility and is less subject to fatigue. Common sites include the hands, head, and feet. Eye-gaze and head-mouse systems can also be used as alternatives to mouse navigation. A user may utilize single or multiple switch sites; the process often involves scanning through items on a screen and activating the switch once the desired object is highlighted.


Alpha Compositing Explained

Alpha Compositing is the process of combining an image with a background to create the appearance of partial or full transparency. It is often useful to render image elements in separate passes, and then combine the resulting multiple 2D images into a single, final image called the composite. For example, compositing is used extensively when combining computer-rendered image elements with live footage.

In order to combine these image elements correctly, it is necessary to keep an associated matte for each element. This matte contains the coverage information—the shape of the geometry being drawn—making it possible to distinguish between parts of the image where the geometry was actually drawn and other parts of the image that are empty.

Alpha compositing is a common image processing routine used to blend two or more images to create a final composite image. Alpha compositing is built upon the concept of layers — each image used in the composite image has a certain hierarchical layer. The image’s alpha channel determines how much of the images in layers underneath it can be seen at its own layer. In other words, the alpha channels control the transparency of the image, and alpha compositing uses the alpha channel to appropriately blend this image with another to exhibit this transparency.

The following picture from Squarespace shows how to create advanced visual graphics easily by superimposing:

Compositing Basics:

In a 2D image element, which stores a color for each pixel, additional data is stored in the alpha channel with a value between 0 and 1. A value of 0 means that the pixel does not have any coverage information and is transparent; i.e. there was no color contribution from any geometry because the geometry did not overlap this pixel. A value of 1 means that the pixel is opaque because the geometry completely overlapped the pixel.

If an alpha channel is used in an image, there are two common representations that are available: straight (unassociated) alpha, and premultiplied (associated) alpha.

With straight alpha, the RGB components represent the color of the object or pixel, disregarding its opacity.

With premultiplied alpha, the RGB components represent the color of the object or pixel, adjusted for its opacity by multiplication. A more obvious advantage of this is that, in certain situations, it can save a subsequent multiplication (e.g. if the image is used many times during later compositing). However, the most significant advantages of using premultiplied alpha are for correctness and simplicity rather than performance: premultiplied alpha allows correct filtering and blending. In addition, premultiplied alpha allows regions of regular alpha blending and regions with additive blending mode to be encoded within the same image.

Assuming that the pixel color is expressed using straight (non-premultiplied) RGBA tuples, a pixel value of (0, 0.7, 0, 0.5) implies a pixel that has 70% of the maximum green intensity and 50% opacity. If the color were fully green, its RGBA would be (0, 1, 0, 0.5).

However, if this pixel uses premultiplied alpha, all of the RGB values (0, 0.7, 0) are multiplied by 0.5 and then the alpha is appended to the end to yield (0, 0.35, 0, 0.5). In this case, the 0.35 value for the G channel actually indicates 70% green intensity (with 50% opacity). Fully green would be encoded as (0, 0.5, 0, 0.5). For this reason, knowing whether a file uses straight or premultiplied alpha is essential to correctly process or composite it.
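The conversion between the two representations is a one-line operation per channel. A minimal Python sketch (the function name is illustrative, and colors are assumed to be floats in [0, 1]):

```python
def straight_to_premultiplied(r, g, b, a):
    """Convert a straight-alpha RGBA pixel to premultiplied alpha.

    Each color channel is scaled by the alpha value; alpha itself is unchanged.
    """
    return (r * a, g * a, b * a, a)

# The 70%-green, 50%-opaque pixel from the text:
print(straight_to_premultiplied(0.0, 0.7, 0.0, 0.5))  # (0.0, 0.35, 0.0, 0.5)
```

Going the other way (dividing by alpha) is undefined when alpha is zero, which is exactly why the color of fully transparent pixels cannot be recovered from a premultiplied image.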

It is often said that associativity is an advantage of premultiplied alpha blending over straight alpha blending, but both are associative. The important difference is in the dynamic range of the colour representation in finite-precision numerical calculations (which is the case in all practical applications): premultiplied alpha has a unique representation for transparent pixels, avoiding the need to choose a "clear color" and the resulting artefacts such as edge fringes (see the next paragraphs). In other words, the color information of transparent pixels is lost in premultiplied alpha, as the conversion from premultiplied to straight alpha is undefined when alpha equals zero. Premultiplied alpha has some practical advantages over normal alpha blending because interpolation and filtering give correct results.

Ordinary interpolation without premultiplied alpha leads to RGB information leaking out of fully transparent (A=0) regions, even though this RGB information is ideally invisible. When interpolating or filtering images with abrupt borders between transparent and opaque regions, this can result in borders of colors that were not visible in the original image. Errors also occur in areas of semi-transparency because the RGB components are not correctly weighted, giving incorrectly high weighting to the color of the more transparent (lower alpha) pixels.
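The color-leaking problem is easy to reproduce numerically. In this sketch (pixel values are assumed to be floats in [0, 1]), averaging an opaque red pixel with a fully transparent pixel whose hidden RGB happens to be green produces a greenish fringe under straight alpha, but not under premultiplied alpha:

```python
def lerp(p, q, t):
    """Linearly interpolate two RGBA tuples, channel by channel."""
    return tuple((1.0 - t) * a + t * b for a, b in zip(p, q))

# Straight alpha: the invisible green of the transparent pixel leaks in.
red = (1.0, 0.0, 0.0, 1.0)          # opaque red
clear_green = (0.0, 1.0, 0.0, 0.0)  # fully transparent; RGB happens to be green
print(lerp(red, clear_green, 0.5))  # (0.5, 0.5, 0.0, 0.5): a green tint appears

# Premultiplied alpha: transparent pixels are all zeros, so nothing leaks.
clear_pm = (0.0, 0.0, 0.0, 0.0)
print(lerp(red, clear_pm, 0.5))     # (0.5, 0.0, 0.0, 0.5)
```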

Premultiplication can reduce the available relative precision in the RGB values when using integer or fixed-point representation for the color components, which may cause a noticeable loss of quality if the color information is later brightened or if the alpha channel is removed. In practice, this is not usually noticeable because during typical composition operations, such as OVER, the influence of the low-precision colour information in low-alpha areas on the final output image (after composition) is correspondingly reduced. This loss of precision also makes premultiplied images easier to compress using certain compression schemes, as they do not record the color variations hidden inside transparent regions, and can allocate fewer bits to encode low-alpha areas.

With the existence of an alpha channel, it is possible to express compositing image operations using a compositing algebra. For example, given two image elements A and B, the most common compositing operation is to combine the images such that A appears in the foreground and B appears in the background. This can be expressed as A over B. In addition to over, Porter and Duff defined the compositing operators in, held out by, atop, and xor (and the reverse operators rover, rin, rout, and ratop) from a consideration of the choices in blending the colors of two pixels when their coverage is, conceptually, overlaid orthogonally.
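With premultiplied alpha, the over operator reduces to one multiply-add per channel. A minimal Python sketch (assuming premultiplied RGBA tuples with float components in [0, 1]):

```python
def over(fg, bg):
    """Porter-Duff 'over': composite premultiplied-alpha fg on top of bg.

    out = fg + bg * (1 - alpha_fg), applied to every channel, alpha included.
    """
    k = 1.0 - fg[3]
    return tuple(f + b * k for f, b in zip(fg, bg))

# Half-opaque red, premultiplied to (0.5, 0, 0, 0.5), over opaque white:
print(over((0.5, 0.0, 0.0, 0.5), (1.0, 1.0, 1.0, 1.0)))  # (1.0, 0.5, 0.5, 1.0)
```

With straight alpha the same operation would need an extra multiply for the foreground color, which is one reason premultiplied alpha is preferred inside compositing pipelines.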

Alpha Blending

Alpha blending is the process of combining a translucent foreground color with a background color, thereby producing a new blended color. The degree of the foreground color's translucency may range from completely transparent to completely opaque. If the foreground color is completely transparent, the blended color will be the background color. Conversely, if it is completely opaque, the blended color will be the foreground color. Of course, the translucency can range between these extremes, in which case the blended color is computed as a weighted average of the foreground and background colors.

Alpha blending is a convex combination of two colors allowing for transparency effects in computer graphics. The value of alpha in the color code ranges from 0.0 to 1.0, where 0.0 represents a fully transparent color, and 1.0 represents a fully opaque color. This alpha value also corresponds to the ratio of "SRC over DST" in Porter and Duff equations.
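As a concrete illustration of the convex combination, here is a small Python sketch for blending a straight-alpha foreground color over an opaque background (colors are assumed to be RGB float tuples in [0, 1]; the function name is illustrative):

```python
def alpha_blend(src, dst, alpha):
    """Blend foreground src over background dst with the given opacity.

    alpha = 0.0 returns dst unchanged; alpha = 1.0 returns src unchanged.
    """
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst))

# Red at 50% opacity over white yields pink:
print(alpha_blend((1.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5))  # (1.0, 0.5, 0.5)
```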

Alpha compositing is an important part of image and video composition algorithms and can do wonders for the graphics it is applied to. It is part of all major graphics processing applications, from simple apps to professional ones, and it is at work in your favorite image editing app even when you don't realize it is happening in the background.


Wiki Science Competition 2017

The Wiki Science Competition (WSC) is an international photo contest for the sciences, organised by Wikimedia this November. Wikimedia is the movement behind Wikipedia, the free encyclopaedia – a global collaboration authored by volunteers.

Our culture cannot be imagined without science. And in our visual world, it’s not enough to just speak about science: we must also show it. For this, we need photographers to capture it and to share it. The WSC was created to encourage the creation and, especially, the free sharing of all sorts of imagery about the sciences.

The WSC happens at two levels: national and international. In many countries there will be a national contest with its own jury and prizes. Winning pictures will advance to the international final and be judged by the international jury to choose the overall winners. For countries without a national organising team, a second international jury will take their place and choose the national candidates for the international final; the goal is that all photographers in the world have the possibility to participate in the competition.

Any support is welcome: from uploading a single image, or spreading the word about the contest, to becoming a partner or sponsor. This November, science will be in the spotlight on Wikipedia! Check the rules and access the upload forms on the competition's official site.

Image categories

The competition has five image categories: People in Science, Microscopy images, Non-photographic media, Image sets, and General category.

If your images are suitable for a scientific article, or to illustrate an encyclopedia or newspaper article, then they are also suitable for this competition.

  • The Wiki Science Competition 2017 will be held from November 1st to December 15th. Only the images uploaded during this period will be eligible for the competition, except when a local competition is held in a different time-frame. Times for the local competitions are listed here.
  • Everyone is allowed to add images, except the members of the jury. There’s no limit to the number of images a participant can upload.
  • The uploader must be the author of the photo, or in the case of institutional uploads, the representative of the organization. You can only participate with your own images, or with images of which you are one of the authors (i.e. you must own the copyright). In the latter case, all the co-authors’ names must be provided.
  • For the photo to be considered scientific, it needs to have a good description. Every photo must have an English-language description, but descriptions in other languages are also welcome. The description should provide information about what is in the image, how and where it was made, and what is important to notice.
  • The images must be published under a free-use license or as public domain. The possible licenses are CC BY-SA 4.0, CC BY 4.0, CC0 1.0, and similar.
  • As this is a photo competition, we expect images of good quality and size. The image size should be at least 2 megapixels, unless the technology used doesn’t allow it. The bigger the file, the better.
  • Files will first compete at the country level and up to five finalists in each of the competition categories per country may advance to the international final.

The competition is not limited to "classical" photographs, as images in science come in many shapes and forms. We accept even computer-generated images, and we are fully aware that drawing precise boundaries between the various categories is nearly impossible. It is even possible to add thematically linked images as sets: images added as one set compete as one image for a prize in their respective category. Sets start from 2 linked images and may include up to 10 images. The author must indicate in which category the image is competing, but the jury has the right to change that category if needed.

The preferred file formats are .jpg for images, .webm or .ogg for video files and .png for computer generated files. Wikimedia Commons is now also starting to support .stl format (3D files). You should always try to provide the best possible quality. If the images are too small, consider presenting them as a compilation in one file.

Images are collected on Wikimedia Commons, an online repository of free-use images, sound, and other media files. To add files, it is necessary to first create an account there. You can use your own name as a username; all alphabets are supported. Please link your e-mail with your account so that it is possible to contact you later if needed. You can see the files you have added under the link "Uploads".

Besides the five competition categories, Wikimedia Commons also uses its own category system to link together similar files. You may skip adding those categories; they will be added later to make images easier to find. One of the main goals of this competition is to spread scientific knowledge, and making more scientific images publicly available plays a great part in it.

The category system inside Wikimedia Commons is built on the principle that images should be placed only in the most specific categories available. This is necessary for sorting the images. Examples are "SEM images from Tallinn University of Technology", "Multi-walled carbon nanotubes", "Videos of Caenorhabditis elegans", "Dallmann laboratory", "Fossils of India", "Ornithologists from Italy", "Archaeological bog finds", etc. One image may be placed into many distinct categories (for example, one image has been placed in the categories "Lepidoptera antennae", "Scanning electron microscopic images of Arthropoda", and "Aglais io anatomy"). Categories may be added and changed after the image has been added to Commons.

Image descriptions can be altered after the files have been added. The goal is to have descriptions that are as good as possible, so feel welcome to improve the descriptions you have written.

After images have been added to Wikimedia Commons, they can also be used in every Wikipedia language version, added to wiki articles, and used within other Wikimedia projects as well (Wikiversity, Wikivoyage, Wikiquote, Wikidata, etc.).

Users on Wikimedia Commons can also give images special statuses: some outstanding images may be selected as Featured pictures or given Quality image status.

Check out more about participating from the official source link.

Organizing teams

Coordinator: secretary of WMAU

Coordinator: Andrea Kareth

Coordinator: Asen Stefanov

Coordinator: Francisco Carvalho Venancio

Coordinators: Carlos Figueroa & Marco Correa Pérez

Czech Republic
Coordinator: Josef Klamo

Coordinator: Reem Al-Kashif

Coordinator: Ivo Kruusamägi

Coordinator: Mikheil Chabukashvili

Coordinator: Rebecca O’Neill

Coordinator: Alessandro Marchetti

Coordinator: Bibigul Makazhanova

Coordinator: Edgars Lecis

Coordinator: Paweł Marynowski

Coordinator: Dmitry Zhukov

Saudi Arabia
Coordinator: Ahmed Al-Elq

Coordinator: Jelena Andreja Radakovic

Coordinator: Rubén Ojeda

Coordinator: ShangKuan Liang-chih

Coordinators: Athikhun Suwannakhan & Taweetham Limpanuparb

Coordinator: Ksenia G

Coordinator: John Sadowski

Special prize for China

A special prize will be offered for the best image of Wiki Science Competition 2017 from the People’s Republic of China. The prize will be offered by juror Alessandro Marchetti as a personal show of gratitude for his two-year stay in the country. A winner will be selected from the Chinese finalists, and he or she will be offered a short weekend at the West Lake UNESCO World Heritage Site, including a free second-class round-trip train ticket to Hangzhou, Zhejiang, dinner at a restaurant, and a one-night stay at a nice international youth hostel close to the scenic lake area. For people living close enough to visit in one day, a simple round-trip ticket and a 1000-yuan cash prize can be offered instead. Even if no Chinese picture qualifies for the final round, this special national prize can still be awarded, unless the quality of all images is considered too low.


Cryptocurrency and why cybercriminals love it

Ever pretend you know what your friends are talking about because you want to sound smart and relevant—and then trap yourself in a lie?

No problem. The next time someone asks, “What is cryptocurrency, anyway?” instead of awkwardly shrugging, you’ll be prepared to dazzle them with your insider knowledge.


Cryptocurrency, in a nutshell

In its simplest form, cryptocurrency is digital money. It’s currency that exists in the network only; it has no physical form. Cryptocurrency is not unlike regular currency in that it’s a commodity that allows you to pay for things online, but the way it is created and managed is revolutionary in the field of money. Unlike dollars or euros, cryptocurrency is not backed by a government or banks. There’s no central authority.

If that both excites and scares you, you’re not alone. But this technology train has left the station. Will it be a wreck? Or will it be the kind of disruptive tech that democratizes the exchange of currency for future generations?

Let’s take a closer look at what cryptocurrency is, how it works, and what the possible pitfalls are.


What makes cryptocurrency different from regular money?

If you take away all the techno-babble around cryptocurrency, you can reduce it down to a simple concept. Cryptocurrency is entries in a database that no one can change without fulfilling specific conditions. This may seem obtuse, but it’s actually how you can define all currency. Think of your own bank account and the way transactions are managed—you can only authorize transfers, withdrawals, and deposits under specific conditions. When you do so, the database entries change.

The only major difference, then, between cryptocurrency and “regular” money is how those entries in the database are changed. At a bank, it’s a central figure who does the changing: the bank itself. With cryptocurrency, the entries are managed by a network of computers belonging to no one entity. More on this later.

Outside of centralized vs. decentralized management, the differences between cryptocurrency and regular currency are minor. Unlike the dollar or the yen, cryptocurrency has one global rate—and it's worth a lot. If you go by the speculation, Bitcoin is bound to cross $25,000 by January 2018.

Don't be swayed by the seemingly promising numbers, though: cryptocurrency remains a highly intangible bubble that could burst at any time.

How does cryptocurrency work?

Cryptocurrency aims to be decentralized, secure, and anonymous. Here’s how its technologies work together to try and make that happen.

Remember how we talked about cryptocurrency as entries in a database? That database is called the blockchain. Essentially, it’s a digital ledger that uses encryption to control the creation of money and verify the transfer of funds. This allows for users to make secure payments and store money anonymously, without needing to go through a bank.

Information on the blockchain exists as a shared—and continuously reconciled—database. The blockchain database isn’t stored in a single location, and its records are public and easily verified. No centralized version of this information exists for a cybercriminal to corrupt. Hosted by millions of computers simultaneously, its data is accessible to anyone on the Internet.

So how, exactly, is cryptocurrency created and maintained on the blockchain? Units are generated through a process called mining, which involves harnessing computing power (CPU cycles) to solve complicated math problems. All cryptocurrencies are maintained by a community of miners: members of the general public who have set up their machines to participate in validating and processing transactions.

And if you’re wondering why a miner would choose to participate, the answer is simple: Manage the transactions, and earn some digital currency yourself. Those that don’t want to mine can purchase cryptocurrency through a broker and store it in a cryptocurrency wallet.
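Those “complicated math problems” are typically hash puzzles: find a number (a nonce) that, combined with the block's data, produces a hash with a required property, such as a run of leading zeros. A minimal Python sketch of this proof-of-work idea (the data string and difficulty here are illustrative, not Bitcoin's real parameters):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with
    `difficulty` zero hex digits -- a toy version of proof of work."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Hypothetical transaction data; real miners hash an entire block header.
nonce, digest = mine("alice pays bob 1 coin")
print(nonce, digest[:16])
```

Each additional zero of difficulty multiplies the expected work by 16, which is how real networks tune how hard mining is as more computing power joins.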

Cryptocurrencies today

Bitcoin was the first and remains the most popular cryptocurrency, but others saw its potential and soon jumped on the bandwagon. Litecoin was developed in 2011, followed by Ripple in 2012. In 2015, Ethereum joined the fray and has become the second most-popular cryptocurrency. According to CoinMarketCap, there are now more than 1,000 cryptocurrencies on the Internet.

Cryptocurrency’s popularity on the Internet soon bled into other real-world applications. Japan has adopted Bitcoin as an official currency for commerce. Banks in India are using Ripple as an alternative system for transactions. JP Morgan is developing its own blockchain technology, Quorum, an enterprise version of Ethereum.

However, as with any new and relatively untested technology, the cybercriminals wanted in for themselves. And it wasn’t long before Bitcoin and other cryptocurrencies fell victim to their own democratic ideals.

Cryptocurrency abuse

As secure as a Bitcoin address is, the application of its technology is often fumbled, usually by unpracticed programmers looking to get in on the action who create faulty code. Fundamentally, the system is superior to centralized database systems, but poor coding practices among its thousands of practitioners have created a multitude of vulnerabilities. Like vultures to carrion, cybercriminals flocked to exploit them. According to Hacked, an estimated 10 to 20 percent of all Bitcoin in existence is held by criminals.

While cryptocurrency was initially hailed as the next big thing in money, a savior for folks who had just lost everything in a steep recession (but watched as the banks that screwed them over walked away unscathed), a hack in 2011 showed how insecure and easily stolen cryptocurrency could be. Soon, the criminal-minded rushed in, looking to take advantage of the cheap, fast, permission-less, and anonymous nature of cryptocurrency exchange. Over the last nine years, millions of Bitcoin, worth billions of dollars, have been stolen—some events so major that they drove people to suicide.

On a smaller but much more frequent scale, cryptocurrency is used on the black market to buy and sell credit card numbers and bot installs, fund hacktivism or other “extra-legal” activity, and launder money. It’s also the payment method of choice for ransomware authors, whose profits are made possible by collecting money that can’t be traced. Certainly makes getting caught that much more difficult.

Ransom note asking for Bitcoin

And if that weren’t enough to call cryptocurrency unstable, the process of mining itself is vulnerable and has already attracted some high-profile hacks. Services such as CoinHive allow those that deploy it to mine cryptocurrency using the CPUs of their site visitors—without the visitors’ knowledge or permission. This process, known as cryptojacking, is robbery-lite: users may see an impact to their computer’s performance or a slight increase in their electric bill, but are otherwise unaffected. Or rather, they were, until cybercriminals figured out how to hack CoinHive.

Future applications

So where does that leave us with cryptocurrency? Surely its popularity is skyrocketing and its value is spiking so hard it could win a gold medal for beach volleyball at the Olympics. But is it a viable, safe alternative to our current currencies? Cryptocurrency could democratize the future of money—or it could end up in technology hell with AskJeeves and portable CD players.

We can see the technological applications for the future that demonstrate the clear advantages of cryptocurrency over our current system. But right now, cryptocurrency is good in theory, bad in practice. The technology is volatile and highly hackable, and we'll have to create security measures that can keep up with its development; otherwise, cybercriminals will flood the market so heavily that it never moves beyond the dark web.


Blockchain technology: For cryptocurrency and more

Imagine a place where you can safely store all your personal information and only you decide who has access to it. You can choose which parts of that information you want to share, and you can just as easily revoke that access.

If this place ever comes into existence, I am willing to bet it will be built on blockchain technology.

Blockchain technology is still very much in development, but those in the know are convinced it will change many markets and industries. So, after delving into the workings of blockchain and cryptocurrency, it’s time to have a closer look at what blockchain technology can do outside the realm of cryptocurrencies. Most of these possibilities take the form of smart contracts.


What is a smart contract?

The expression “smart contracts” was coined by Nick Szabo long before blockchain technology was refined. He envisioned a technology meant to replace legal contracts, where the terms of a contract could be digitized and automated. An action (payment) could be completed as soon as the condition (delivery) was met.

After the introduction of blockchain, the term “smart contract” was used more widely as software that runs computations on the blockchain.

As a quick reminder, the blockchain is defined as a distributed, decentralized, cryptographically-secured ledger, where each new block contains a reference to the previous block, as well as all the confirmed “transactions” since that previous block was approved.

I use the term transactions lightly here, since it would seem to imply that we are still discussing cryptocurrency, which is not the case. We call them transactions because of the protocols that are in place to determine whether a contract is considered fulfilled.

Today, a smart contract can be any kind of software, as long as it’s based on blockchain technology. It can be used not only to complete “transactions,” but to secure data. A smart contract could specify that your physician has access to your medical history, but she can’t see your financial history.
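That kind of rule is, at heart, just a condition checked before data is released. A hypothetical plain-Python sketch of the access-control logic (the roles and record names are invented for illustration; a real smart contract would enforce this check on-chain):

```python
# Toy access-control rule in the spirit of a smart contract:
# each record type lists which roles may read it.
ACCESS_POLICY = {
    "medical_history": {"physician", "patient"},
    "financial_history": {"accountant", "patient"},
}

def can_read(role: str, record: str) -> bool:
    """Return True if `role` is allowed to read `record`."""
    return role in ACCESS_POLICY.get(record, set())

print(can_read("physician", "medical_history"))    # True
print(can_read("physician", "financial_history"))  # False
```

The point of putting such a policy on a blockchain is that no single party can silently rewrite it; changes would have to be approved like any other block.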


Some early blockchain technology developers

Although the use of the blockchain technology for other applications is still in the early stages, we are seeing some promising developments. For example, the Ethereum Project advertises itself as a decentralized platform that runs smart contracts: applications that run exactly as programmed, without any possibility of downtime, censorship, fraud, or third-party interference. A list of 850 apps built on the Ethereum platform can be found at

One of the best known Ethereum-based apps is Augur, which uses a blockchain-based betting system to use the knowledge of the masses in order to predict upcoming events.

IBM is involved in some trials with major global companies like Maersk based on Hyperledger Fabric. Hyperledger Fabric is a blockchain framework that provides a foundation for developing applications or solutions with a modular architecture. It was designed by IBM and Digital Asset as a technology to host smart contracts called “chaincode” that comprise the application logic of the system.

The potential future applications of this technology are endless, from implementing a blockchain ledger in order to streamline management operations and approvals to moving elections online (and guaranteeing secure votes, as it would take an insane amount of computer power to hack). Still, as with any new tech, there are both golden opportunities and potential for corruption.


Some positive applications of smart contracts

Here are some examples of how companies can benefit from using smart contracts. Using blockchain technology, they could:

  • Design a fully-automated supply chain management system. When a certain condition is reached, the appropriate action is taken. Imagine a factory that automatically orders supplies when it threatens to run out of them.
  • Manage huge paper trails. Each step in the paper trail can be added as a new block in the chain, and checks can be placed to ensure all conditions have been met that are needed to proceed.
  • Exchange vital business information in real time. Every node can contribute to and access all the information in the blocks.
  • Eliminate the middleman when dealing with others. The parties can interact directly and securely, by relying on the blockchain technology.
  • Eliminate fraud. Irreversibility makes it fraud-resistant. In a proper setup, there is no way to make unauthorized changes in already approved blocks.
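The first bullet boils down to a condition-action rule: when stock drops below a threshold, trigger an order. A sketch of the condition side (the threshold and part names are made up for illustration; a smart contract would fire the order automatically when this condition appears on the ledger):

```python
# Hypothetical reorder threshold for the factory example.
REORDER_THRESHOLD = 10

def check_inventory(stock: dict[str, int]) -> list[str]:
    """Return the parts that fall below the reorder threshold --
    the 'condition' a supply-chain smart contract would act on."""
    return [part for part, qty in stock.items() if qty < REORDER_THRESHOLD]

orders = check_inventory({"bolts": 4, "panels": 50, "wire": 9})
print(orders)  # ['bolts', 'wire']
```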


Some potential pitfalls of smart contracts

Reasons why companies might shy away from using blockchain technology for certain parts of their business include:

  • The content of the contracts is visible to all participants. There are some parts of your business that are not suitable for public knowledge. So there may be a need to encrypt certain data.
  • It’s impossible to correct errors. Once a faulty contract has been approved, you would have to reverse it with a new one.
  • Long development and implementation is needed to replace existing solutions on a large scale. This may improve when we are more well-versed in applying this technology.
  • If personally identifiable information needs to be stored, this could break local or international regulations. For example, smart contracts would have a hard time complying with privacy laws like the upcoming GDPR.
  • A fully distributed network offers a larger surface for hackers. Remember that all the nodes have access to all the information. So it could pose extra risks if a hacker can access a node or pretend to be one.

The development of the blockchain is expected to cause a revolution similar to the one brought to us by the Internet. It may take some time for smart contracts to conquer the corporate world, but the ball is rolling. If you want to be ready for the future, especially if you work in industries where value transactions take place, it’s a good idea to start learning more about blockchain technology and smart contracts.


Blockchain Technology


A blockchain, originally block chain, is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a hash pointer as a link to a previous block, a timestamp, and transaction data. By design, blockchains are inherently resistant to modification of the data. Harvard Business Review defines it as "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way." For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority.

Blockchains are secure by design and are an example of a distributed computing system with high Byzantine fault tolerance. Decentralized consensus has therefore been achieved with a blockchain. This makes blockchains potentially suitable for the recording of events, medical records, and other records management activities, such as identity management, transaction processing, documenting provenance, or food traceability.

The first blockchain was conceptualized in 2008 by an anonymous person or group known as Satoshi Nakamoto and implemented in 2009 as a core component of bitcoin where it serves as the public ledger for all transactions. The invention of the blockchain for bitcoin made it the first digital currency to solve the double spending problem without the need of a trusted authority or central server. The bitcoin design has been the inspiration for other applications.


How is the blockchain made secure?

Without making this too complicated, consider a system that only works in one direction. That system calculates the hash value that is the unique answer to a math problem based on the data contained in the block. Every time you feed the system the same data in the block, the hash value will be the same. Every change in the block results in a different hash value.

Take for example adding up the digits in a long value like 123456789, which will result in 45. Changing the first digit will have an effect on the result, but from knowing 45 alone it is impossible to figure out the value we used as input. This is basically the same idea as the blockchain, only its hashes and inputs are much more complicated.

So there is no way (short of centuries of brute-forcing) to go in reverse and find the data of the block based on a hash value. This provides miners, or those who maintain the transactions in the blockchain, with a method to check the validity of a transaction without being able to create a block with false information. This is what solves the double spending problem. It makes it impossible to make up a transaction and feed the false information into the blockchain. You cannot find the hash that would make that transaction look legitimate.
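The digit-sum analogy above can be run directly, next to a real cryptographic hash for comparison:

```python
import hashlib

def digit_sum(n: str) -> int:
    """The toy one-way function from the text: add up the digits."""
    return sum(int(d) for d in n)

print(digit_sum("123456789"))  # 45
# Changing the first digit changes the result...
print(digit_sum("223456789"))  # 46
# ...but a real blockchain uses a cryptographic hash, where even a
# one-character change scrambles the entire output:
print(hashlib.sha256(b"123456789").hexdigest()[:16])
print(hashlib.sha256(b"223456789").hexdigest()[:16])
```

Note that the digit sum is a poor hash (many inputs share the same output); SHA-256 makes finding any input for a given output computationally infeasible, which is the property the blockchain relies on.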

How are new blocks created?

Every so often a new block is created—as a set of transactions recorded over a given period of time. This block contains all the transactions that were made on the blockchain since the previous block was closed. Miners then calculate the hash value of the current block. The first one to get it right gets a reward.

Now the nodes come into play. A node is a machine that is broadcasting all the transactions across the peer-to-peer network that is the base of the blockchain. The nodes check and broadcast the hash of this proposed block until agreement is reached about the new block. Then this block will be accepted as the new starting point for the transactions in the next block. The block is saved in many different places so that no one entity has total control over it.
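The linking described above is what makes tampering visible: each block's hash covers the previous block's hash, so editing an old block breaks every link after it. A toy sketch (the transaction strings are invented):

```python
import hashlib

def block_hash(prev_hash: str, transactions: str) -> str:
    """Hash a block's contents together with the previous block's hash."""
    return hashlib.sha256(f"{prev_hash}|{transactions}".encode()).hexdigest()

# Build a three-block toy chain.
h0 = block_hash("genesis", "alice->bob: 5")
h1 = block_hash(h0, "bob->carol: 2")
h2 = block_hash(h1, "carol->dave: 1")

# Tampering with the first block changes its hash, which no longer
# matches the prev-hash baked into every later block.
tampered = block_hash("genesis", "alice->bob: 500")
print(tampered == h0)  # False -- every block after it is invalidated
```

This is why rewriting history requires redoing the work for all subsequent blocks, faster than the honest network extends the chain.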

The transactions we mention do not have to be money transfers, as the blockchain can be used for many other applications. Consider, for example, smart contracts that can be programmed to pay the supplier when a condition has been met, such as the delivery of goods. This moves the trust in the completion of the transaction from an intermediary like a bank or a website to the blockchain.


How does mining work on the blockchain?

Why would miners bother with appending to the blockchain and verifying new blocks? The “proof of work” method gives rewards to miners for calculating the hashes. So basically they get paid for the energy they put into the work. However, the proof of work method used in Bitcoin and other digital currencies is causing an energy consumption level that could run an entire country.

The number of processing cycles needed to mine effectively has made CPU mining a thing of the past. Instead, miners moved on to GPU mining and then to ASICs, or application-specific integrated circuits, which are highly specialized and much more effective at what they do.

Although the number of Bitcoin that are given out each day as rewards stays the same over a given period of time, the number of mining farms has taken the number of cycles needed for one Bitcoin through the roof. Imagine huge server farms with racks upon racks of ASICs mining away, and that will give you a good idea of what the professional miners are doing. This is not “Joe at Home” anymore, but serious business. 

One alternative method that is in planning for the Ethereum Project is “proof of stake.” Proof of stake rewards those that have the most invested in the currency or gas (gas is the internal pricing for running a transaction or contract in Ethereum). Some fear this will turn blockchain into “the rich get richer” system, so there may be some new problems to be solved on the horizon.


Internet of Things (IoT) security: what is and what should never be

We are all lucky enough to live in a world full of interconnected devices, which is certainly cool and convenient because it’s so easy to keep remote things at your fingertips wherever you are. The flip side of all this technical sophistication is that anything connected to the Internet is potentially vulnerable. Cybercriminals are busy looking for ways to compromise various smart devices and have had quite a bit of success doing it. It turns out that the Internet of Things is low-hanging fruit for threat actors. The hack scenarios below might seem like science fiction, but they are absolutely real these days.

The Internet has penetrated seemingly all technological advances today, resulting in Internet for ALL THE THINGS. What was once confined to a desktop and a phone jack is now networked and connected in multiple devices, from home heating and cooling systems like the Nest to AI companions such as Alexa. The devices can pass information through the web to anywhere in the world—server farms, company databases, your own phone. (Exception: that one dead zone in the corner of my living room. If the robots revolt, I’m huddling there.)

This collection of inter-networked devices is what marketing folks refer to as the Internet of Things (IoT). You can’t pass a REI vest-wearing Silicon Valley executive these days without hearing about it. Why? Because the more we send our devices online to do our bidding, the more businesses can monetize them.

Unfortunately, the more devices we connect to the Internet, the more we introduce the potential for cybercrime. Analyst firm Gartner says that by 2020, there will be more than 26 billion connected devices—excluding PCs, tablets, and smartphones. Let’s talk about the inherent risks with IoT.

IoT Cybercrime Today

Both individuals and companies using IoT are vulnerable to breach. But how vulnerable?

  • Can criminals hack your toaster and get access to your entire network?
  • Can they penetrate virtual meetings and procure a company’s proprietary data?
  • Can they spy on your kids, take control of your Jeep, or brick critical medical devices?

So far, the reality has not been far from the hype. We have seen a smart refrigerator hacked to send pornographic spam while making ice cubes, and baby monitors used to eavesdrop on and even speak to sleeping (or likely not sleeping) children. In October 2016, thousands of security cameras were hacked to create the largest-ever Distributed Denial of Service (DDoS) attack against Dyn, a provider of critical Domain Name System (DNS) services to companies like Twitter, Netflix, and CNN. And in March 2017, Wikileaks disclosed that the CIA has tools for hacking IoT devices, such as Samsung SmartTVs, to remotely record conversations in hotel or conference rooms. How long before those are commandeered for nefarious purposes?

Privacy is also a concern with IoT devices. At present, IoT attacks have been relatively scarce in frequency, likely owing to the fact that there isn’t yet huge market penetration for these devices. If just as many homes had Cortanas as have PCs, we’d be seeing plenty more action. With the rapid rise of IoT device popularity, it’s only a matter of time before cybercriminals focus their energy on taking advantage of the myriad of security and privacy loopholes.

Security and privacy issues

According to Forrester’s 2018 predictions, IoT security gaps will only grow wider. Researchers believe IoT will likely integrate with the public cloud, introducing even more potential for attack through the accessing of, processing, stealing, and leaking of personal, networked data. In addition, more money-making IoT attacks are being explored, such as cryptocurrency mining or ransomware attacks on point-of-sale machines, medical equipment, or vehicles. Imagine being held up for ransom when trying to drive home from work. “If you want us to start your car, you’ll have to pay us $300.”

Privacy and data-sharing may become even more difficult to manage. For example, how do you best protect children’s data, which is highly regulated and protected according to the Children’s Online Privacy Protection Rule (COPPA), if you’re a maker of smart toys? There are rules about which personally identifiable information can and cannot be captured and transmitted for a reason—because that information can ultimately be intercepted. Privacy concerns may also broaden to include how to protect personal data from intelligence gathering by domestic and foreign state actors. 

  • Your smart coffee machine acting up? Might be a red flag

  • Parental control systems are vulnerable, too

  • How about smart locks? Amazon Key?

  • Mobile voice assistants aren’t much safer

  • Your work computer got locked down by malware

  • Dating services are full of impostors

  • Smart home is a vulnerable home, period

  • Even the Tesla car, the next big thing, is hackable

Using Uber on your iPhone can be dangerous

Apple has reportedly granted Uber the privilege to access iPhone screens even when the app is closed. This scope of permissions may expose sensitive user data to man-in-the-middle attacks. So, think twice before calling Uber if you are an iPhone user.

So where are IoT defenses? Why are they so weak?

Seeing as IoT technology is a runaway train, never going back, it’s important to take a look at what makes these devices so vulnerable. From a technical, infrastructure standpoint:

  • There’s poor or non-existent security built into the device itself. Unlike mobile phones, tablets, and desktop computers, little-to-no protections have been created for these operating systems. Why? Building security into a device can be costly, slow down development, and sometimes stand in the way of a device functioning at its ideal speed and capacity.
  • The device is directly exposed to the web because of poor network segmentation. It can act as a pivot to the internal network, opening up a backdoor to let criminals in.
  • There’s unneeded functionality left in based on generic, often Linux-derivative hardware and software development processes. Translation: Sometimes developers leave behind code or features developed in beta that are no longer relevant. Tsk, tsk. Even my kid picks up his mess when he’s done playing. (No he doesn’t. But HE SHOULD.)
  • Default credentials are often hard coded. That means you can plug in your device and go, without ever creating a unique username and password. Guess how often cyber scumbags type “1-2-3-4-5” and get the password right?

From a philosophical point of view, security has simply not been made an imperative in the development of these devices. The swift march of progress moves us along, and developers are now caught up in the tide. In order to reverse course, they’ll need to walk against the current and begin implementing security features—not just quickly but thoroughly—in order to fight off the incoming wave of attacks.

How to protect your devices

What can regular consumers and businesses do to protect themselves in the meantime? Here’s a start:

  • Evaluate if the devices you are bringing into your network really need to be smart. (Do you need a web-enabled toaster?) It’s better to treat IoT tech as hostile by default instead of inherently trusting it with all your personal info—or allowing it access onto your network. Speaking of…
  • Segment your network. If you do want IoT devices in your home or business, separate them from networks that contain sensitive information.
  • Change the default credentials. For the love of God, please come up with a difficult password to crack. And then store it in a password manager and forget about it.
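For the third point, one reliable way to get a hard-to-crack password is to generate it with a cryptographically secure random source rather than invent it yourself. A quick sketch using Python's standard secrets module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation,
    using the OS's cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A 20-character draw from this ~94-symbol alphabet gives far more entropy than any memorable phrase, which is exactly why it belongs in a password manager rather than your head.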

The reason IoT devices haven’t already short-circuited the world is that a lot of devices are built on different platforms and different operating systems, and use different programming languages (most of them proprietary). So developing malware attacks for every one of those devices is unrealistic. If businesses want to make IoT a profitable model, security WILL increase out of necessity. It’s just a matter of when. Until then…gird your loins.



NSF/NCAR C-130 Research Aircraft

The C-130 is a versatile and capable research platform that carries a wide variety of scientific payloads. The C-130 has a 10-hour flight endurance, a 2,900 nautical mile range at up to 27,000 ft, and a payload capacity of up to 13,000 lbs. In addition to standard thermodynamic, microphysics, and radiation sensors, the C-130 has a roomy fuselage payload area (414 ft²) and many versatile inlets and optical ports. The aircraft carries instruments and sensors in pods and pylons on both wings. The C-130 can carry advanced EOL and community instrumentation.

The Lockheed C-130 “Hercules” aircraft is a four-engine, medium-size utility aircraft that has proven to be one of the most well-known and versatile aircraft ever built. The NSF/NCAR aircraft is a model EC-130Q, which is similar to the more common C-130H model except for electrical and air-conditioning modifications. The aircraft is an all-metal, pressurized, high-wing monoplane powered by four Allison T-56-A-15 turbo-prop engines. It is equipped with dual-wheel, tricycle landing gear with the main gear wheels arranged in tandem and the nose gear arranged side-by-side. The C-130, maintained and managed by EOL, was placed into service with the NSF in 1992.

The NSF/NCAR C-130 is ideal for studies of the middle and lower troposphere. In a typical research configuration it carries 13,000 pounds of payload with 8 to 9 hour endurance, and there is considerable flexibility in adjusting payload and range to meet specific mission requirements. It also has the capability to extend a ramp in flight (unpressurized), which allows for deployment of specialized equipment such as ocean buoys. The C-130 performs a variety of research missions at altitudes below about 26,000 feet. With its excellent low altitude performance and heavy lift capabilities, the C-130 is ideal for studies of the planetary boundary layer and lower to mid-tropospheric chemistry missions. In addition to NCAR’s standard thermodynamic, wind and turbulence, microphysics, radiation, and trace gas instruments, the C-130 has a roomy fuselage payload area that can accommodate many rack-mounted instruments with access to a number of inlets and optical ports. Several wing pods for external instrument stores of varying sizes are also available.




NSF/NCAR GV HIAPER Research Aircraft

The NSF/NCAR Gulfstream-V High-Performance Instrumented Airborne Platform for Environmental Research (GV HIAPER) aircraft is a cutting-edge observational platform that meets the scientific needs of researchers who study many different aspects of the earth's environment, such as atmospheric chemistry and climate, chemical cycles, clouds and aerosols, solar and terrestrial radiative fluxes, upper troposphere/lower stratosphere processes, mountain waves and turbulence, air quality, and mesoscale weather.

The HIAPER aircraft is the preeminent airborne research platform for scientists and researchers in a number of disciplines. HIAPER has demonstrated success in collecting data required to meet a broad range of scientific studies and objectives including air quality and chemistry; chemical composition and transport within the atmosphere; effects of chemical process on climate change; atmospheric dynamics and thermodynamics on the synoptic and mesoscales; cloud properties and processes; atmospheric predictability; geological surveys; and electrification of the atmosphere.

In support of university-driven observational field campaigns, HIAPER is maintained and operated on behalf of the National Science Foundation by the National Center for Atmospheric Research. HIAPER is based in Broomfield, Colorado, USA and is managed by EOL’s Research Aviation Facility (RAF).


Unique Capabilities

The NSF/NCAR HIAPER is a highly-modified Gulfstream V business jet that has unique capabilities that set it apart from other research aircraft. It can reach 51,000 feet (15,500 meters), enabling scientists to collect data from near the earth’s surface to the tops of storms and to the lower edge of the stratosphere. With a range of about 7,000 miles (11,265 kilometers), it can reach many remote locations, allowing for sampling from the North Pole to the South Pole.

Such attributes, plus the ability to carry 5,600 pounds (2,540 kilograms) of state-of-the-art sensors, mean that HIAPER will be on the forefront of scientific discovery. The aircraft modifications allow for sampling using many instruments mounted in the cabin and in custom-built wing pods. Using air intakes, the aircraft enables researchers to study pivotal chemical processes high above Earth that affect global temperatures. As a flying laboratory, it lets scientists probe the upper edges of hurricanes and thunderstorms in unprecedented detail, determining the dynamics that drive these powerful storms and thus providing valuable data for earlier prediction of such storms.

Scientific Capabilities

The flight characteristics of the aircraft, plus the ability to carry 5,600 pounds (2,540 kilograms) of sensors, make the HIAPER GV a versatile airborne laboratory for scientific discovery. Scientists can bring a whole suite of instruments to the upper edges of hurricanes, thunderstorms, and other storms, capturing unprecedented detail for studying these powerful systems. The aircraft enables researchers to study critical chemical processes from the Earth’s surface to the stratosphere, often in remote locations. These types of data are often essential for understanding environmental changes, for example from air pollution. The HIAPER GV has also supported remote sensing measurements in remote locations that have played an important role in the calibration and validation of satellite instruments.


Each HIAPER GV payload is customized to meet the scientific objectives and research goals of a specific mission. NCAR, in conjunction with university groups and private industry, has developed and maintains a suite of highly capable airborne instruments known as the HIAPER Airborne Instrumentation Solicitation (HAIS) suite. In addition to the HAIS instrumentation, NCAR offers in-situ, remote sensing, and expendable instruments that can be deployed from HIAPER. Typical payloads for scientific missions combine these instruments with others provided and operated by investigators from universities, other government organizations, and private companies. All instruments must comply with mechanical, structural, electrical, and flammability requirements. NCAR works closely with instrument investigators to assist with payload certification and integration, and it maintains a Design and Fabrication Services facility capable of manufacturing airborne instruments and interface hardware.


The NSF/NCAR HIAPER GV is available on a competitive basis to all qualified scientists from universities, NCAR, and other U.S. government agencies requiring the aircraft and associated supporting services to carry out their research objectives in support of NSF programs. The deployment of the HIAPER GV, one of the NSF Lower Atmosphere Observing Facilities, is driven by the NSF peer review process, the capabilities of a specific platform to carry out the proposed observations, and the scheduling of the facility for the requested time.


RAF Aircraft Instrumentation:

Project Planning Charts:

HIAPER Investigator’s Handbook:
