Make the most of your content
Systems as diverse as maritime vessels, aircraft, and computer networks have one important thing in common: they are all highly complex. A big part of that complexity is found in the volume and variety of information resources associated with them. For these systems to remain operational and affordable, organizations need a scalable and robust way to manage, exchange, and publish system information.
This challenge is one of the reasons that content technologies exist in the first place. It cannot be solved by working harder or buying bigger computers. It demands that the content inside these information resources be managed "as content": as something that will be reused across many different information events and consumed in many different ways. A solution depends on leveraging, integrating, and managing digital content solutions so that organizations can evolve these complex systems, and the information about them, over very long periods of time and amid dramatic changes in the technology and business landscape.
Organizations of every shape and size are challenged by the fact that finding useful information is hard and only seems to get harder as time advances. Heavy investments in expensive search technologies and ponderous enterprise content management systems have not helped. If anything, these investments, by pooling ever larger collections of information together, only seem to make things worse.
Organizations in the commercial, non-profit, and government sectors have all found that it is possible to deploy information discovery solutions that cut through the fog of information and help users quickly find what is relevant to specific business needs. These solutions leverage selected parts of the information content to collect and foreground descriptive details about important information resources. Once isolated, these details can be presented through a simplified discovery service that works in part because it offers users a less congested, and more focused, way to interact with large volumes of information.
Although organizations have reassessed their online presence several times since the early days of the web, the meteoric rise of mobile devices and social media has forced many of them to take another hard look. In doing so, these organizations find not only that they have fallen out of step with prevailing design practices and accessibility standards but that their online presence is cluttered with mountains of unused and disjointed information.
A solution to this particular challenge is anything but easy, in part because the challenge has emerged not so much from inattention as from the collision of too many leading-edge design innovations. Practical solutions must do several things: the data sources underlying the experiences need to be reintegrated, large volumes of information need to be rationalized as managed digital content, and a process must be introduced that cycles all improvements through regular testing that engages users and, in particular, the tasks those users are trying to perform. Many disciplines need to be brought together, and coordinated much more effectively than in the past, to form a solution, and content technology is one of them.
Organizations in a range of industries fall under varying degrees of regulation. These organizations are subject to specific requirements largely around how they handle information and how they use it to conduct their operations. In more formal scenarios, these organizations undergo stringent filing and review processes and must operate in a way that conforms to strict compliance guidelines. Needless to say the cost of achieving and maintaining compliance can be high both financially and operationally.
As is seen in the healthcare, defense, energy, and aerospace sectors, and increasingly in sectors as diverse as finance, social media, security, and agriculture, complying with evolving regulatory regimes demands a new level of precision in the management and utilization of information content. In essence, organizations can only address this challenge by managing the content that underlies their information resources. If this is done, through the judicious deployment of content technologies, it becomes possible to improve the operational efficiency of all parties in the regulatory process so that savings, in funds and time, can be realized by everyone. In each case, collaboration amongst the parties can also find ways to move beyond saving time and money to improve the overall level of compliance being achieved and realize the goals that gave rise to the regulations in the first place.
Large providers of educational services and information are confronted with a common problem. How do they create and manage a formal body of curricular material so that administrators, instructors, customers and learners can easily access the information they need? And how can these stakeholders actively participate in the ongoing evolution of the curriculum? How can the outcomes of learning activities be efficiently leveraged in this process? The explosive growth of eLearning tools and techniques has made a solution to this challenge more daunting and more pressing.
Solutions to this challenge have proven elusive because learning groups in organizations are highly attuned to the learning experiences they deliver, whether in the classroom or online. This complicates the realization of solutions, as these stakeholders have historically had a very hard time thinking about learning content separately from the learning experiences it engenders. And the learning technologies typically used amplify this tendency. Real solutions depend on designing and managing learning content in a way that exists outside any one learning experience. The content solutions that have succeeded in this area, and there have been several, did so by reorganizing learning content around a network of learning objectives so that all learning resources needed to achieve specific goals can be dynamically assembled, deployed, and used in the ways that make sense for the participants.
Software developers deploy a range of tools and techniques in order to manage the complexity of the software design and development process. This is always a challenge but for some software teams it is a challenge that blocks their innovation pathway. One example of this type of challenge is the need for a broad range of stakeholders to contribute to the design of software. Another is the need for software to adapt dynamically to different environments. To meet these challenges, radically new approaches to software design are often needed.
One global leader in optical networking called upon a radically different approach to designing and generating software, and this approach stands as an exemplar for others facing similar challenges. The goal in this case was to equip customers with the ability to completely reconfigure a network to meet their own requirements, and to do so in near real-time. The networking company had pushed state-of-the-art software design tools to the breaking point but could proceed no further. The solution lay in giving software designers, including customers, the ability to author extremely precise design specifications as digital content assets. Around these assets the stakeholders could collaborate and agree upon the design rules that would apply in different circumstances. The run-time software artifacts would then be generated directly from these software design assets. In this solution, the designer wrote digital content, and the digital content wrote the software code.
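The pattern of "content writing code" can be sketched in miniature. This is a hypothetical illustration only: the specification format, the `PortPolicy` component, and its rule fields are invented, not taken from the networking company's actual system.

```python
# Hypothetical sketch: run-time behavior generated directly from a design
# specification that is managed as a digital content asset.
DESIGN_SPEC = {
    "component": "PortPolicy",
    "rules": [
        {"when": "load > 0.9", "then": "reroute"},
        {"when": "link_down", "then": "failover"},
    ],
}

def generate_handler(spec):
    """Emit Python source for a rule-evaluation function from the spec."""
    lines = [f"def {spec['component'].lower()}(load=0.0, link_down=False):"]
    for rule in spec["rules"]:
        lines.append(f"    if {rule['when']}:")
        lines.append(f"        return {rule['then']!r}")
    lines.append("    return 'no_action'")
    return "\n".join(lines)

# The content "writes" the software: the generated source is executed to
# produce a callable run-time artifact.
namespace = {}
exec(generate_handler(DESIGN_SPEC), namespace)
handler = namespace["portpolicy"]
```

Because the behavior lives in the specification rather than in hand-written code, stakeholders can renegotiate the rules and regenerate the run-time artifact without touching the generator.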
Organizations that provide products also need to provide end user documentation about those products. In many industries, it is a regulatory requirement that they do so, and in the language of each market in which they provide their product. This is a challenge for a number of reasons. For one, the skills and focus of the teams creating the products are very different from those of the professional communicators preparing the documentation. For another, product update lifecycles have been shortening, placing increased strain on the relationship between product development and product documentation. Complicating the situation, management frequently sees only the revenue associated with the product and therefore fails to appreciate the value of the documentation that supports it.
Digital content technologies are an inescapable part of any solution to this challenge. Product documentation has no choice but to become fully digital so that it can be ever more closely integrated into the product lifecycle while still meeting its fundamental goal of being clear and usable for all product users. Open content standards such as the Darwin Information Typing Architecture (DITA) have arisen as the international community of professional communicators has worked together to share best practices in addressing this challenge. Resources such as DITA, and the associated community of practice, have proven extremely valuable when constructing affordable and compelling product documentation content solutions.
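To make the idea concrete, here is a minimal, hand-written DITA concept topic processed with the Python standard library. The topic content is illustrative only; real DITA documents are authored in specialized tools and validated against the OASIS DITA DTDs or schemas.

```python
# Parse a minimal DITA concept topic and pull out its descriptive parts,
# which is the kind of structure that lets documentation be reused and
# republished in many forms.
import xml.etree.ElementTree as ET

DITA_TOPIC = """\
<concept id="pump-overview">
  <title>Pump overview</title>
  <shortdesc>The pump circulates coolant through the system.</shortdesc>
  <conbody>
    <p>Inspect the pump seals before each use.</p>
  </conbody>
</concept>
"""

topic = ET.fromstring(DITA_TOPIC)
title = topic.findtext("title")          # -> "Pump overview"
short_desc = topic.findtext("shortdesc")
```

Because every topic carries its own identity, title, and short description, downstream systems can assemble, filter, and translate topics independently of any one deliverable.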
Many organizations have been exploring how to apply the principles of library science to their information holdings with a view to making that information more findable and useful. They quickly discover that this effort demands a great deal of investment. These outlays then grow as they look to extend their efforts to enterprise scale, where they must leverage emergent technologies such as artificial intelligence and natural language processing (AI/NLP). And everything is complicated by the ever-present pressure to deliver concrete business benefits, usually on a comically short time scale.
A key part of any solution to this challenge is introducing a solution for taxonomy management. Commercial products have arisen in this space, but there is also the option of introducing a content solution. A content solution approach can be attractive for a number of reasons, not least of which is affordability. Essentially, these solutions revert to the basic notion that a taxonomy is first and foremost a document that clearly describes a perspective on a domain of knowledge. If a taxonomy is created and managed as high-quality digital content, then it can be extended by linking to other taxonomies, integrated into other enterprise systems and processes so that it does not become yet another specialized silo, published into multiple forms that support intelligent discovery processes, and leveraged to inform and guide automated information classification systems.
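A minimal sketch of that last point, assuming a taxonomy kept as plain structured content: each term carries a preferred label, alternate labels, and a broader-term link (in the spirit of the SKOS model). The terms, labels, and the naive label-matching classifier are all invented for illustration.

```python
# A taxonomy managed as simple structured content, used to guide a
# (deliberately naive) automated classification step.
TAXONOMY = {
    "energy":     {"label": "Energy",     "alt": [],                "broader": None},
    "renewables": {"label": "Renewables", "alt": ["green energy"],  "broader": "energy"},
    "wind-power": {"label": "Wind power", "alt": ["wind energy"],   "broader": "renewables"},
}

def classify(text, taxonomy):
    """Return ids of taxonomy terms whose labels appear in the text."""
    text = text.lower()
    hits = []
    for term_id, term in taxonomy.items():
        labels = [term["label"]] + term["alt"]
        if any(label.lower() in text for label in labels):
            hits.append(term_id)
    return hits

def ancestors(term_id, taxonomy):
    """Walk broader-term links so broader concepts can be indexed too."""
    chain = []
    broader = taxonomy[term_id]["broader"]
    while broader:
        chain.append(broader)
        broader = taxonomy[broader]["broader"]
    return chain
```

Because the taxonomy is just content, extending it is an editorial act: adding a term, an alternate label, or a link to another taxonomy immediately changes how classification and discovery behave.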
The drive in many organizations is towards increasingly standardized work procedures. Sometimes this is undertaken to achieve specific types of quality certification. More often it is undertaken to meet heightened targets for quality, efficiency, and sustainability set as part of enterprise-wide lean initiatives. And in today's increasingly automated work environments, these procedures must be something that both people and machines can understand and act upon in real-time.
This challenge can only really be addressed through the deployment of an integrated content solution that handles standardized work procedures as digital content and that integrates the lifecycle of these assets into the many different enterprise systems that must interact in order for the organization to do what it does. A content solution for smart procedures must introduce application components that support authoring and review activities, publishing processes, and the interpretation and application of these instructions by both people and machines. Solutions of this type have been recognized internationally with awards for leadership in lean manufacturing innovation.
Many organizations initiate, manage and complete cases. Accordingly, there are a great many case management tools on the market. What is less visible is the fact that the lifecycle of any case will almost never occur within the confines of any one of these tools. More commonly, case files need to be shared with other organizations and sometimes with many other organizations. Also, the information collected around cases becomes a record that must be accessible for long periods of time and sometimes forever. Many of these requirements are simply not addressed, or even considered, by mainstream case management tools.
It is through the deployment of content technologies that a complete case management solution becomes possible. Within such a solution, there is a role to be played by commercial case management systems. Critically, such a solution must facilitate the interoperation of many commercial case management systems. The records within cases, as well as the design information about the full case management lifecycle, all become digital content assets that can move between different systems, between different format representations appropriate to different uses, and ultimately between the working case management system and a persistent, long-term archival repository. While frequently unpopular with individual case management system vendors, whose interests lie elsewhere, organizations adopting a content-driven approach to case management have realized substantial benefits that have continued over years and sometimes decades.
The engineering design activity, undertaken in sectors ranging from pharmaceuticals through to aerospace, is even more complex and challenging than it appears. One factor often missed is how engineers interact with a large body of engineering standards that constrain almost all design decisions. Improving how engineers access and use engineering standards, and participate in their ongoing evolution, is a highly sought-after, but daunting, goal.
The solution to this challenge lies in modernizing how the engineering standards themselves are maintained and made available for use within design environments. If these standards can be converted from massive bookshelves of loose-leaf binders into data resources that can be precisely searched and directly reused, the engineering effort can be given a dramatic boost in productivity. Establishing a persistent connection between the engineering standards that apply and the design decisions being made, including, where necessary, deviations, permits changes and insights to flow efficiently from outcomes back to sources and vice versa. The resulting "authority network" fundamentally improves the engineering lifecycle, in large part by making the essence of engineering work explicitly digital and therefore more agile.
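The "authority network" can be sketched as a simple graph: design decisions cite the standard clauses that constrain them, so a change to a clause can be traced forward to every affected decision. The clause and decision identifiers below are invented for illustration.

```python
# Design decisions linked back to the standard clauses they cite. When a
# clause changes, the network tells us which decisions need review.
DECISIONS = {
    "D-101": {"title": "Wing spar material", "cites": ["STD-7.2", "STD-7.3"]},
    "D-102": {"title": "Fastener torque",    "cites": ["STD-9.1"]},
    "D-103": {"title": "Spar inspection",    "cites": ["STD-7.3"]},
}

def affected_decisions(clause, decisions):
    """Return ids of design decisions that cite the changed clause."""
    return sorted(d for d, rec in decisions.items() if clause in rec["cites"])
```

The same links run in reverse: a deviation recorded against a decision points back to the exact clause it deviates from, which is what lets insights flow from outcomes back to the standards themselves.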
Field engineers and equipment support technicians have the unenviable task of arriving at a customer’s site at a time when a problem with a critical system has disrupted, and sometimes halted, operations. Whether it is a large printer, an aircraft, or a mining tool, there will be people anxious to see the problem fixed, and fixed quickly. How quickly it is fixed will largely depend on how complete, up-to-date, and easy to use the maintenance information at these field engineers’ and technicians’ fingertips is. Improving this information is always welcome.
Over the last 30 years, intelligent maintenance aids have arisen as an answer to this challenge in the defense, aerospace, energy, healthcare, and construction sectors. The most attractive solutions, and the ones that have proven most adaptable in the face of changing requirements, have been those that leverage web-based interfaces that are low-cost and mobile-friendly, and that facilitate the provision of a very wide range of technical support information to users working in many different environments. These intelligent maintenance aids are only as good as the content solution behind them, because they rely entirely on the managed accuracy of the information resources provided in response to every request for guidance. When done properly, these decision support tools integrate all of the content assets relevant to a given maintenance situation so that the field engineer can be efficient and effective on the ground.
Commercial publishers as well as large enterprises have come to see that they increasingly need a dynamic publishing capability. Such a capability will allow them to deliver content that is packaged for every user’s individual needs and formatted to perform optimally on whatever device the user happens to be using at a given time. The explosion in eBook readers and mobile devices has thrown a spotlight on the fact that most publishers and most enterprises are limited by legacy publishing infrastructures that struggle to produce two different versions of their content let alone an unlimited array of renditions.
Equipping organizations, whether commercial publishers or large enterprises, with a dynamic publishing capability calls for two things. One is that their publishing infrastructures typically need end-to-end modernization. The other is that they ultimately need to return to the basic question of how their content is prepared and managed so that it can actually fuel a dynamic publishing process. With the right backend content solution in place, it becomes possible for these organizations to offer a digital content store that has been optimized to respond to user requests for information. These content stores can be exposed through a content API (application programming interface) that lets external applications call for, retrieve, and render information resources on-the-fly.
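The shape of such a content API can be sketched as follows. The store contents, resource ids, and rendition names here are invented; a real service would sit behind an HTTP endpoint and hold far more than one resource.

```python
# A content store holding multiple renditions of the same resource, with
# a lookup function that lets callers request the form that suits their
# device and fall back gracefully when a rendition is missing.
CONTENT_STORE = {
    "install-guide": {
        "html":  "<h1>Install guide</h1><p>Steps follow.</p>",
        "plain": "Install guide. Steps follow.",
    },
}

def get_content(resource_id, rendition="html"):
    """Return the requested rendition, falling back to plain text."""
    resource = CONTENT_STORE.get(resource_id)
    if resource is None:
        raise KeyError(f"unknown resource: {resource_id}")
    return resource.get(rendition, resource["plain"])
```

The key design point is that rendition selection happens at request time, against one managed source, rather than by maintaining parallel copies of the content for each output channel.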
There is much talk about the innovation economy, and about how leading companies are repeatedly disrupting traditional industries with game-changing innovations that combine new technologies with new service models. These innovators themselves face the challenge of managing their innovation processes, and of managing the information resources that make it possible to acquire, share, combine, unbundle, license, and sell innovations. Market leaders planning to stay in front need to support their creative activities with information services that make those activities manageable and scalable. And one of the things that innovators need to ask themselves is whether or not they genuinely understand, and therefore own, their innovations. Many do not have a good answer to that question because their innovations are only partially documented and rarely in a form that can be accessed, modified, and used in the latest technology platforms.
This challenge boils down to helping organizations to support teams in creating, sharing, managing, and leveraging the content that describes their innovations and that constitutes what they actually retain once the teams disband and once the original innovation has been deployed. These organizations have an interest in continuously formalizing their innovation processes while avoiding the dangers of smothering the dynamic flow of team creativity that is so central to successful innovation. A content solution approach to this problem seeks to help teams to find, modify, create, analyze, and test innovations by allowing them to collaborate on the digital content that describes those innovations — thereby opening up the possibilities for how these innovations can be leveraged.
All institutions buy goods and services. Large organizations can easily have annual procurement budgets that run into the billions. And the lion’s share of these procurement budgets (typically well over 80%) are directed to the acquisition of complex solutions that are customized to their unique needs. Whether buildings, ships, or software systems, the efficiency and effectiveness of the procurement process ultimately comes down to how completely and clearly the organization articulates its requirements and how usefully the supplier prepares the support information that comes with the delivered solution.
Whereas many organizations have sought to save money purely by shaving small amounts from their high volume transactions (typically representing only 20% of their overall expenditures), leading organizations have looked more deeply at the procurement process to find much greater savings in how they acquire complex solutions. These savings are to be found in fundamental improvements in how the information challenges surrounding procurement are addressed. If the information about a desired solution, and about how it will be used, can be created and managed in a way that can gracefully adapt over time and that can be intelligently leveraged throughout the system lifecycle then everything changes — for the better. This is how genuine procurement reform happens.
Many business activities depend upon information aggregated from many sources. Some of these activities are large-scale, such as seen with a commercial publisher aggregating sources into a reference portal or a prime contractor pulling together the contributions of thousands of suppliers. Some of these are very small but important nonetheless, such as when an individual accesses a government website looking for information on how to do something that will invariably cross a number of jurisdictional boundaries. Creating an integrated view of information from sources that were never designed to be consolidated is a challenging task to put it mildly.
Information aggregators from around the world, and of every size and shape, have found ways to tackle the challenges of bringing information resources together. They have learned that the best way to smooth over the gaps and overlaps in aggregated information is to create a layer of metadata and descriptive content that helps users focus and organize their searches in meaningful ways. It is vital that this approach automate how this layer is created and maintained, and here recent advances in cloud artificial intelligence (AI) services have been a major boon. It is also vital that the discovery services offer more than just enhanced metadata about the information resources. Users typically need more than that: they need to be able to assess a resource once it has been discovered, and to understand more about it than a few category tags will tell them. They need access to content. Organizations in the automotive, finance, healthcare, government, academic, and publishing sectors have all benefitted from being able to aggregate information using digital content technologies to provide this type of focused discovery experience.
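The metadata layer described above can be sketched in a few lines: records from sources that were never designed to be consolidated are normalized into one index that carries both a category tag and a descriptive snippet, so users can assess a resource at the point of discovery. The source record shapes and field names are invented for illustration.

```python
# Two sources with incompatible record shapes, normalized into a single
# metadata-plus-content index that a discovery service can query.
SOURCE_A = [{"id": "a1", "subject": "Brakes", "desc": "Brake pad replacement steps."}]
SOURCE_B = [{"key": "b7", "topic": "Brakes", "summary": "Brake fluid specifications."}]

def build_index(source_a, source_b):
    """Map each source's fields onto one shared record shape."""
    index = []
    for rec in source_a:
        index.append({"id": rec["id"], "tag": rec["subject"], "snippet": rec["desc"]})
    for rec in source_b:
        index.append({"id": rec["key"], "tag": rec["topic"], "snippet": rec["summary"]})
    return index

def discover(tag, index):
    """Return matching records, snippet included so users can assess them."""
    return [rec for rec in index if rec["tag"] == tag]

index = build_index(SOURCE_A, SOURCE_B)
```

In practice the normalization rules are themselves managed content, and automated classification (increasingly AI-assisted) fills the `tag` and `snippet` fields rather than hand mapping.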
Although the field has been growing for over thirty years, a great deal of attention has recently gathered around the digital humanities, where computing technology is deployed to assist and advance traditional humanities scholarship. Initially confined to the academy, digital humanities projects have come to be recognized as very close to the types of research activities that are essential in today's interconnected world. What makes these projects interesting is the fact that they confront the complexities of text and media head-on, and do so with some of the most challenging scenarios imaginable.
Digital humanities projects call upon a lot of technologies including those centered on how we create, manage, and leverage content assets. For this reason, Gnostyx became involved in a digital humanities project at the University of Oxford. The Digital Miscellanies Index (DMI) was constructed to give researchers around the world the ability to access and explore data about the composition of poetry miscellanies (collections) and their publishing history in 18th century England. Among the lessons learned from this project, and one that is immediately applicable in all industries, is that it is very difficult to know beforehand what metadata will emerge as important. This fact poses serious challenges to traditional technology design practices and it was for this reason Gnostyx had to introduce more flexible content technologies so that key DMI project milestones could be achieved while accommodating an unexpectedly wide range of metadata elements.
For decades in the past, and probably for decades to come, organizations have sought cost-effective ways to convert legacy content from single-purpose formats to open, digital representations that can be used in many ways. This activity has proven difficult and expensive for a number of reasons. One is that legacy content formats can be stubbornly obscure. Another, and a more important one, is that in order to be fully useful in a new world, the content needs to be enriched and improved at the same time as it is converted. And this effort relies on the insight of subject matter experts who understand the domain but whose time, and patience, is often limited.
In numerous industry sectors, there have arisen best practices and solution patterns for addressing this challenge. One of these solution patterns is the use of interactive interfaces whereby subject matter experts provide input to, and feedback on, conversion processes. This allows the conversion rules to be adapted based on accumulating experience. Another pattern is the deployment of aggressive content analysis and validation processes that can be used to push data about the conversion to stakeholders who can, again, provide corrective feedback. In many domains, a key hurdle to be overcome is guaranteeing the parity of converted content with the original source, and here too there are several design patterns that have proven consistently useful.
Although its history is longer than is often assumed, the practice of Content Strategy has gained prominence over the last ten to fifteen years. More and more organizations have realized that they need to think about their content more carefully if they truly want to improve the online user experience they are providing. Content Strategy looks at what content an organization needs to provide to users, whether prospects, partners, or customers, in order to materially improve the organization's business performance. As a practice and as an undertaking on individual projects, Content Strategy is a challenge due to its fundamentally interdisciplinary nature, bridging as it does different business areas, communication practices, and technology domains.
From a long history that extends back over thirty years, several important lessons about Content Strategy can be extracted. One is that a Content Strategy is a plan of action, and as such it should be advanced in a way that pushes an organization to act: to make improvements in how it acquires, manages, publishes, evolves, and leverages its content assets. Another lesson is that content cannot be productively considered separately from the technology domain within which it lives. Yet another is that, when pursued in an active and integrated manner, Content Strategy can be a powerful catalyst for organizational change because it focuses attention on key external stakeholders: prospects, customers, and partners.
A common refrain, heard in many a meeting room, is that organizations need to understand and articulate their business requirements first, and only after these requirements are nailed down should they look at the available, or possible, technologies. There is a serious problem with this edict: technology has a habit of changing the business landscape, and as soon as a new technology is introduced the business requirements tend to change, sometimes radically. How can an organization explore the business impact of technology without committing itself to a particular technology investment?
A good response to this challenge exists. It’s called Information Prototyping. Essentially, low-cost, open-source web technologies, coupled with digital content technologies, are leveraged to inexpensively explore potentially new information experiences for users and new ways of conducting business for the organization. In this way an organization can begin to encounter the business impacts of technology, and of new ways of engaging customers, so that its understanding of the business requirements catches up with the possibilities the new technologies provide. And it can test its original business assumptions against the capabilities of available, and affordable, technologies, and against the needs of its users. This is one of the areas where content technologies can offer organizations the most benefit, even though this fact is not widely appreciated.