In today's connected embedded device market, driven by the Internet of Things (IoT), a large share of devices in development are based on Linux of one form or another. The prevalence of low-cost boards with ready-made Linux distributions is a key driver in this. Acquiring hardware, building custom code, connecting devices to other hardware peripherals and the internet, and managing devices through commercial cloud providers has never been easier. A developer or development team can quickly prototype a new application and get devices into the hands of potential users. This is a good thing and results in many interesting new applications, as well as many questionable ones.
When planning a system design for beyond the prototyping phase, things get a little more complex. In this post, we want to consider mechanisms for developing and maintaining your base operating system (OS) image. There are many tools to help with this but we won't be discussing individual tools; of interest here is the underlying model for maintaining and enhancing this image and how it will make your life better or worse.
Hobbyist and maker projects primarily use the Centralized Golden Master method of creating and maintaining application images. This model's chief benefits are speed and familiarity, allowing developers to quickly set up such a system and get it running. The speed comes from the fact that many device manufacturers provide canned images for their off-the-shelf hardware. For example, boards from families such as the BeagleBone and Raspberry Pi offer ready-to-use OS images and flashing tools. Relying on these images means having your system up and running in just a few mouse clicks. The familiarity comes from these images generally being based on a desktop distro many device developers have already used, such as Debian. Years of using Linux then transfer directly to the embedded design; the packaging utilities remain largely the same, so it is simple for designers to get the extra software packages they need.
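Even when relying on a canned vendor image, it is worth verifying the download before flashing it. A minimal sketch, assuming the vendor publishes a SHA-256 checksum alongside the image (the function names and paths here are illustrative, not from any vendor's tooling):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large images need not fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path: str, expected_hex: str) -> bool:
    """Compare the computed digest against the vendor-published checksum."""
    return sha256_of(path) == expected_hex.strip().lower()
```

Only once `verify_image` returns True would you write the image to an SD card or eMMC with your flashing tool of choice.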
The final issue that arises with this development model is reliance on third parties. If the hardware vendor changes its image in ways that don't work for your design, you may need to invest significant time to adapt. To make matters more complicated, as mentioned before, hardware vendors often base their images on an upstream project such as Debian or Ubuntu. This introduces even more third parties who can affect your design.
The distributed model of creating and maintaining an image for your application relies on generating target images separately from the target hardware. The developer workflow here is similar to standard software development using an SCM system; the image is fully buildable by tooling and each developer can work independently. Changes to the system are made via edits to metadata files (scripting, recipes, configuration files, etc.) and then the tooling is rerun to generate an updated image. These metadata files are managed using an SCM system, and individual developers can merge the latest changes into their working copies to produce their development images. In this case, no golden master image is needed and developers avoid the associated bottleneck.
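To make the idea concrete, here is a toy sketch of how tooling can derive a deterministic image identity purely from versioned metadata; the manifest format and function names are invented for illustration and do not come from any particular build system:

```python
import hashlib
import json

def image_id(metadata: dict) -> str:
    """Derive a deterministic identifier from build metadata alone.

    Serializing with sorted keys means the same recipes, package pins,
    and config options always hash to the same ID, regardless of who
    runs the build or in what order the dict was assembled.
    """
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]

# Example of the kind of metadata a team might keep under SCM control.
manifest = {
    "distro": "custom-linux",
    "packages": {"busybox": "1.36.1", "openssl": "3.0.13"},
    "kernel": {"version": "6.6.30", "config": "defconfig+overlay"},
}
```

Real build systems are far more involved, but the principle is the same: the image is a function of the metadata, so reviewing and merging metadata changes in SCM is effectively reviewing image changes.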
Working in this fashion allows the size of your development team to increase without reducing the productivity of individual developers; all engineers can work independently of one another. Additionally, this build setup ensures that your builds can be reproduced. Using standard SCM workflows means that, at any future time, you can regenerate a specific build, allowing for long-term maintenance even if upstream providers are no longer available. As with distributed SCM tools, however, additional policy needs to be in place to enable reproducible release-candidate images. Individual developers have their own copies of the source and can build their own test images, but for a proper release engineering effort, development teams will need to establish merging and branching standards and ensure that all changes targeted for release eventually get merged into a well-defined branch. Many upstream projects already have well-defined processes for this kind of release strategy (for instance, using *-stable and *-next branches).
To be clear, the distributed model does suffer from some of the same issues mentioned for the Golden Master Model, especially the reliance on third parties. This is a consequence of using systems designed by others and cannot be completely avoided unless you choose a completely roll-your-own approach, which comes with significant development and maintenance costs.
For general production use, the benefits in terms of team size scalability, image reproducibility, and developer productivity greatly outweigh the learning curve and overhead of systems implementing the distributed model. Support from board and chip vendors is also widely available in these systems, reducing the upfront costs of developing with them. For your next product, I strongly recommend starting the design with a serious consideration of the model being used to generate the base OS image. If you choose to prototype with the golden master model with the intention of migrating to the distributed model, make sure to build sufficient time into your schedule for this effort; the estimates will vary widely depending on the specific tooling you choose, the scope of the requirements, and the out-of-the-box availability of the software packages your code relies on.
With our team of experts, we have made it possible for companies to start their IoT projects quickly and inexpensively, and to deploy them at larger volumes easily. Moreover, beyond the IoT platform, we offer you a partnership to create solutions together that will help your teams go faster and further, and make you money.
Because PaaS delivers all standard development tools through an online GUI, developers can log in from anywhere to collaborate on projects, test new applications, or roll out completed products. Applications are designed and developed right in the PaaS using middleware. With streamlined workflows, multiple development and operations teams can work on the same project simultaneously.
As all good maker projects do, this one began with breadboarding out a conceptual circuit. This involves identifying the functions you want your device to have and the components you will use. I wanted my device to:
If individual services cannot or should not be implemented in the cloud, we can also help our customers develop projects with hybrid solutions using Azure IoT Edge or Azure Stack (hybrid cloud).
Expedite your IoT and edge computing development with the "Barracuda App Server Network Library", a compact client/server multi-protocol stack and toolkit with an efficient integrated scripting engine. It includes industrial protocols, an MQTT client, an SMQ broker, a WebSocket client and server, REST, AJAX, XML, and more. The Barracuda App Server is a programmable, secure, and intelligent IoT toolkit that fits a wide range of hardware options.
SMQ lets developers quickly and inexpensively deliver world-class management functionality for their products. SMQ is an enterprise-ready IoT protocol that enables easier control and management of products at massive scale.
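As an illustration of the publish/subscribe pattern that protocols like MQTT and SMQ build on, here is a small sketch of MQTT-style topic filtering with the standard `+` (single-level) and `#` (multi-level) wildcards. This is a generic illustration of the technique, not part of the SMQ or Barracuda APIs:

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Return True if an MQTT-style filter matches a concrete topic.

    '+' matches exactly one topic level; '#' matches all remaining
    levels and appears only as the final element of a filter.
    """
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True  # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False  # filter is longer than the topic
        if f != "+" and f != t_parts[i]:
            return False  # literal level mismatch
    return len(f_parts) == len(t_parts)
```

A broker uses matching like this to decide which subscribers receive a published message, e.g. a subscription to `sensors/+/temp` receives `sensors/kitchen/temp` but not `sensors/kitchen/humidity`.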
The problem with all these technologies was that the same software developer who wrote the application code also carried the burden of the UI. If you were writing something for Windows, your software developer was writing a UI in .NET or Win32. First, this reduces your ability to develop your product in parallel. The more developers you add to a project, the more complex it becomes and the more overall time it takes to develop.
Next time you are resourcing a development project, consider a web stack for developing your UI. It has the potential to reduce the amount of code that needs to be written, shorten development time, speed time to market, and improve the quality of your product. It may be the perfect win-win-win for your engineers, company, and customers.
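A minimal sketch of this split, assuming the device exposes its state over HTTP and the UI is ordinary HTML/JS rendered by the browser; the endpoint names and state fields are hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for real sensor readings the firmware would produce.
DEVICE_STATE = {"temperature_c": 21.5, "uptime_s": 3600}

class DeviceUIHandler(BaseHTTPRequestHandler):
    """Serve a browser-rendered UI and a JSON status endpoint."""

    def do_GET(self):
        if self.path == "/status":
            # The firmware side: plain data, no UI concerns.
            self._send(200, "application/json", json.dumps(DEVICE_STATE).encode())
        elif self.path == "/":
            # The UI side: web-stack code the browser renders, developed
            # in parallel with (and decoupled from) the firmware.
            html = (b"<html><body><h1>Device</h1>"
                    b"<script>fetch('/status')</script></body></html>")
            self._send(200, "text/html", html)
        else:
            self._send(404, "text/plain", b"not found")

    def _send(self, code, ctype, body):
        self.send_response(code)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet
```

Because the UI is just web content against a JSON endpoint, the UI team never touches firmware code, which is precisely the parallelism argument above.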
In many ways, IoT data orchestration will simplify IoT infrastructure in the same way that cloud services simplified IT infrastructure. Virtualization, exponential improvements in processing and network capacity, and other technological innovations enabled companies like Amazon and Microsoft to deliver enterprises cloud services that were extremely easy to deploy, use and scale. With storage, networking and computing delivered to them as a single, integrated service, the cloud allowed enterprises to avoid dealing with many of the complexities related to IT infrastructure. This dramatically simplified IT, delivering enterprises the agility they needed to quickly and inexpensively develop new web applications, and then rapidly and cost-effectively scale these applications, digitally transforming entire markets.
The most important attribute of the Internet also may be the most obvious: it can transmit information quickly, conveniently, and inexpensively. Routine transactions, including making payments, processing and transmitting financial information, and maintaining records, can be handled less expensively with web-based technology. Using Internet technology, many firms, especially those in data-intensive industries such as financial services and medical care, can reduce their cost of production.
The overall goal of this design thinking course is to help you design better products, services, processes, strategies, spaces, architecture, and experiences. Design thinking helps you and your team develop practical and innovative solutions for your problems. It is a human-focused, prototype-driven, innovative design process. Through this course, you will develop a solid understanding of the fundamental phases and methods in design thinking, and you will learn how to implement your newfound knowledge in your professional work life. We will give you lots of examples; we will go into case studies, videos, and other useful material, all of which will help you dive further into design thinking.