Data Science for DoD Ontology

Table of contents
  1. Story
  2. Slides
  3. Spotfire Dashboard
  4. Research Notes
  5. Developing Ontologies - Data Engineering
    1. Use-Case Centered Development Process
    2. Dr. Jens Pohl's Ontology and Software Development Process
      1. 1. Purpose
      2. 2. Basic Point
      3. 3. Background
      4. 4. Overview
      5. 5. Phase 1, Create Ontology
      6. 6. Phase 2, Create Application Engine
      7. 7. Phase 3, First Prototype
      8. 8. Phase 4, Second Prototype
      9. 9. Phase 5, Third Prototype
      10. 10. Phases 6 and Beyond, Successive Prototypes
      11. 11. Model Driven Architecture (MDA)
      12. 12. FY 10 National Defense Authorization Act (NDAA), Paragraph 804
    3. Ontology Development Principles
      1. Introduction
      2. Methodology for Developing an Ontology
        1. Step (1) - Identifying the purpose
        2. Step (2) - Building the ontology
        3. Step (3) - Coding the ontology
        4. Step (4) - Integrating with existing ontologies
        5. Step (5) - Evaluation
        6. Step (6) - Documentation
    4. ICODES and Ontology Development
      1. 1. Purpose
      2. 2. Basic Points
      3. 3. Background
      4. 4. Discussion
      5. 5. Documents on Concepts and Methods
  6. DoD Standardization of Military and Associated Terminology
    1. 1. Purpose
    2. 2. Applicability
    3. 3. Policy
    4. 4. Responsibilities.
    5. 5. Releasability Unlimited
    6. 6. Effective Date
    7. Enclosures
      1. Enclosure 1
      2. Enclosure 2
  7. CJCS Standardization of Military and Associated Terminology
  8. Joint Doctrine Development System
  9. Joint Doctrine Development Process
  10. Marine Corps Planning Process
  11. GFM DI Implementation: Unique Identification (UID) for GFM Volume 1
  12. GFM DI Implementation: The Organizational and Force Structure Construct Volume 2
  13. Organizational and Force Structure Construct (OFSC) for Global Force Management (GFM)
  14. Slides from GFM DI Briefing
    1. Title Slide
    2. Problem - Solution
  15. Data Engineering for D-Day
    1. Cartoon from the "New Yorker"
    2. Core Memo-002 Gavin Quotation

Story

Data Engineering for DoD Ontology

CorePlanningMemoCartoon.png

I asked Peter Morosoff of E-MAPS to provide information on Data Engineering for DoD Ontology and he provided the following:

  • General Gavin's brief description of his data engineering as a first step in preparing for D-Day and a relevant cartoon from the "New Yorker." See above.
  • A meeting with Jens Pohl that would cover (1) the processes and methods developed for building ontology-based information tools and (2) how to use this information to influence senior government officials.
  • The author (Mabel_E._Echols@omb.eop.gov) of, and a link to, the OMB Memo on ontology.
  • Issuances from the Office of the Secretary of Defense on Global Force Management (GFM) and the GFM Data Initiative (GFM DI). The GFM data effort is basically an exercise in ontology. The GFM initiative was championed by the former Vice Chairman of the Joint Chiefs of Staff, General Cartwright, while he was the Joint Staff J-8, because he was having difficulty getting information he needed for important decisions.
  • A slide from a briefing that was delivered to the Deputy's Advisory Working Group (DAWG).
  • A document that provides (1) a repeatable process and (2) a structure that warfighters use to build an "ontology" for a planned operation. Since the value of a plan lies in how good a foundation it provides for adjusting to the inevitable changes and unexpected developments that arise during the execution of any activity, an operation plan is not so much a list of actions to be executed (although an operations order includes that) as a description of the force to be employed and the many relationships between units and units, units and terrain, units and missions, missions and tasks, etc. Global Force Management, by the way, is intended to make many of these relationships machine-usable (e.g., as a basis for machine inferencing). I thought it would, therefore, be prudent for me to go over this information because I believe you can exploit what the warfighters have done, and do, in your discussions about data engineering.
  • Four policy and procedure documents out of OSD and the Chairman of the Joint Chiefs of Staff on terminology (DoD and CJCS) and the development of "people-friendly" ontologies in the form of doctrinal manuals for joint warfare (Development System and Process). The DoD IT community has nothing as neat as this. Look at http://www.dtic.mil/doctrine/doctrine/status.pdf (PDF) for a display of all the joint publications (JP) on one image. For each publication, its icon on the slide links to the document.

​​JointDoctrineHierarchy.png

This is an amazing response that requires some time to digest, so I decided to build a knowledge base of these documents so I could find and reuse their contents. This could be applied to all of the documents in the Joint Doctrine Hierarchy shown above. That would be a DoD Ontology of Joint Doctrine Publications! It would be another example of Data Science for Data Publications that we are doing in the Federal Big Data Working Group Meetup!

Peter Morosoff also sent the WebEx on ICODES, as follows:

This statement caught my eye: Dr. Pohl and his associates use MDA tools and approach because of the complexity of the ontologies and other software products they develop.  With MDA, revisions to ontologies, applications engines, interfaces, etc., are determined by first creating or revising the data in the MDA model and then processing that model to produce the application engines, interfaces, etc. 

I wonder if Dr. Pohl is aware of Be Informed?

MORE TO FOLLOW FROM MEETING

Slides

Spotfire Dashboard

Research Notes

Ontology and Ontologizing – Essential Elements in the Link between Health Data and Value
http://www.whitehouse.gov/sites/defa...8_102609-1.pdf (PDF)
Mabel_E._Echols@omb.eop.gov
November 13, 2009

Developing Ontologies - Data Engineering

Use-Case Centered Development Process

Source: Word

(Excerpted from the USTRANSCOM CICE Report of October 2003)

It could be argued that it is not the goal of either the existing CDE or the proposed CICE to bring together all of the data in USTRANSCOM, nor to create a single data model for every system and application. These are essentially implementation details. To concentrate on these aspects is to lose sight of the fact that the principal purpose of any system is to provide value to end-users. Several questions then arise: Who are the end users? What decisions do they need to make? How would access to integrated data and information (i.e., data in context) help in the decision process? In general, these are all aspects of a single question: If users had access to all of the data in all of the systems used anywhere in USTRANSCOM, what would they want to do with it?

These questions are complicated by the fact that CICE is bound to change the way in which users perform their work, and will certainly create the possibility that they will be doing different kinds of work than they do now, once the system is in operation. As a result, it is impossible to determine what the real requirements of the system will be once it is built, because no one can precisely predict what kinds of changes are likely to take place.

The solution lies in an iterative development process. The guiding principle of iterative development is to deliver functional software at short intervals to end-users who then provide feedback and guidance on future requirements. The process of defining requirements becomes incremental, and the basis of collaboration between end-users and system designers. Since end-users know how they perform their work under current conditions, they must be considered an important source of input for defining implementation priorities. While designers and developers can foresee future possibilities, they typically cannot predict whether a given piece of functionality will in fact become an important capability for the end-users. Both have knowledge that can guide development, and both are necessary for a successful system. In order for this concept to be realized, it is important to stay focused on the needs of the users. This approach is often referred to as use-case centered development.

There are many forms of use-cases, differing primarily in their relative formality (Cockburn 2001). Basically, a use-case is a story that tells how an ‘actor’ will interact with the ‘system’. Actors can be either human users or other systems. The ‘system’ can be either the entire system or just a part of a system, depending on the objectives and role of the ‘actor’ for a given use-case. Use-cases can provide the basis for requirements discovery and definition. As such, they describe the actor's view of the system under discussion. Use-cases describe the behavior of the system given input from the actor, but only that behavior that the actor is aware of (plus important side effects, if any). However, use-cases do not include details of system implementation or internal design such as data models. They also do not describe the user interface.

A complete use-case includes alternate paths (referred to as ‘extensions’), which describe all the situations under which either the actor or the system can perform different actions based on the current state. The use-case also includes failure scenarios (i.e., conditions under which the system is not able to support the user's goal), along with pre-conditions (i.e., what must be true before the use-case can be executed) and guarantees (i.e., what will be true after the use-case has been successfully executed). Each use-case constitutes a contract for the behavior of the system. To facilitate the implementation of CICE, it would be very helpful to draw up use-cases that describe how the CDE users currently accomplish their goals. Knowledge of current planning and decision-making processes will provide invaluable information to software developers and program managers to determine the data sources and information models (i.e., ontologies) that will be needed in order to implement these existing use-cases in the new knowledge management environment.
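
As an illustration of this structure (not part of the USTRANSCOM report), a use-case contract can be captured in a simple structured form. The sketch below uses Python dataclasses; the field names and the sample planner use-case are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    """A use-case as a behavioral contract: actor, goal, main scenario,
    extensions (alternate paths), failure scenarios, pre-conditions,
    and guarantees (what holds after successful execution)."""
    name: str
    actor: str                                            # human user or another system
    goal: str
    preconditions: List[str] = field(default_factory=list)
    main_scenario: List[str] = field(default_factory=list)
    extensions: List[str] = field(default_factory=list)   # alternate paths
    failures: List[str] = field(default_factory=list)     # failure scenarios
    guarantees: List[str] = field(default_factory=list)   # post-conditions

# Hypothetical "as is" use-case for a planner working with the current CDE.
assemble_data = UseCase(
    name="Assemble unit movement data",
    actor="Deployment planner",
    goal="Combine movement data from several source systems into one view",
    preconditions=["Planner has an account on each source system"],
    main_scenario=["Query each source system", "Merge results by unit identifier"],
    extensions=["A source system is unavailable: use the most recent cached extract"],
    failures=["Unit identifiers do not match across systems"],
    guarantees=["A consolidated data set exists for the requested units"],
)
print(assemble_data.name, "-", assemble_data.actor)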

Use-cases and iterative development:  A set of use-cases can form the starting point for a development process. In an iterative development process, the system is implemented incrementally and delivered to end-users as soon and as often as possible. As users receive successive versions of the software, their responses frequently result in new or modified use-cases, which must be incorporated in future iterations. New requirements are discovered as users and developers work with the system.

This is in contrast to processes that attempt to specify all the requirements before development begins. Comparatively, iterative development processes tend to produce systems that are more accepted by users, since developers are able to respond to changing goals and needs as implementation progresses. This characteristic is especially important in a system like CICE, which is likely and intended to change the way that people perform their work. Requirements for such a system will evolve as users see new possibilities.

For the implementation of the Information Layer of CICE, use-case oriented iterative development should begin by identifying major stakeholders and user categories. For each user category, it is important to find a representative user to provide input and perspective. The initial impetus for building the existing CDE was undoubtedly at least partly driven by a desire to reduce, and if possible eliminate, the need for planning staff and decision makers to manually bring together data from multiple existing systems in order to accomplish their goals. If this is the case, it is vital to include representatives of these planners and decision makers in the initial group of stakeholders and users.

The process of discovering use-cases for CICE will begin by listing examples of situations requiring multiple data sources. Each example should include the reason for bringing together these data, the list of data sources, the method of data extraction (e.g., existing client applications, direct database queries, etc.) and the type of data retrieved from each source. From this information, an "as is" use-case can be defined, followed by the corresponding "to be" use-case, which describes the user's interaction with the planned system.
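
A minimal sketch of the kind of record this discovery step might produce, written as a Python dictionary; the situation, source systems, and extraction methods shown are hypothetical, not taken from the report.

# One "as is" situation recorded during use-case discovery (all values hypothetical).
as_is_example = {
    "reason": "Check that scheduled lift capacity covers a unit's equipment list",
    "data_sources": ["Equipment list system", "Lift scheduling system"],
    "extraction_method": {
        "Equipment list system": "existing client application (manual export)",
        "Lift scheduling system": "direct database query",
    },
    "data_retrieved": {
        "Equipment list system": "equipment items with dimensions and weights",
        "Lift scheduling system": "scheduled lift assets and their capacities",
    },
}

for source in as_is_example["data_sources"]:
    print(source, "->", as_is_example["extraction_method"][source])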

The initial set of use-cases should be prioritized to ensure that the most important interactions are implemented early in the project’s lifetime. The criteria for use-case prioritization are primarily user-oriented (e.g., How often does the user need to execute this use-case? How much time does it take to gather the data? How significant is the result likely to be?). However, especially during the early stages of the development cycle, developer priorities are also critical. Use-cases may require building significant parts of the planned system architecture, or they may involve parts of the architecture that developers see as risky. Both of these situations would cause a use-case to assume a higher priority from a developer’s point of view, since the architecture should be built as soon as possible and potential risks should be addressed early in the project when alternatives are still available should the risk prove insurmountable.

Priorities may also be affected by the data sources involved in a use-case. To the extent possible, each use-case implemented should include one or more data sources that have already been integrated into the system in earlier use-cases, and in addition should include a data source that is not yet part of CICE. In this way, successive development iterations will build on previous work, while gradually extending the range of integration.

Once the first set of use-cases has been prioritized, developers will determine how many use-cases can be reasonably implemented in the first development cycle. Due to the complexity of integrating new data sources into CICE, it is likely that the first cycle will be somewhat lengthy, possibly as much as six months. This duration must be estimated in large part by the data source integration team, based on the specific data sources involved. The first cycle is likely to consist almost entirely of building integration and architectural infrastructure, but will also include a (possibly small) number of use-cases. The key goal of all iterative development processes is that at the end of each development cycle, there will be releasable software. Whether a particular version of the system is released for use or not is a decision that is likely to be made outside the development team, but each development cycle should result in a system that can be released if that decision is made. In the case of CICE, the product of the first cycle should be released at least to the user groups whose use-cases are included in the system.

Later development cycles will follow essentially the same pattern. As users work with the evolving system, they will generate new use-cases and extensions to existing use-cases. Some older use-cases may become obsolete as the new system changes the way that users are working. For the Information Layer of CICE, it is likely that developers will see ways that some use-cases can be modified by constructing agents to determine that a user may benefit from specific information. And, as more data sources are added to the Operational Data Store or Data Warehouse, new types of processing (e.g., OLAP and Data Mining applications) may become both possible and useful. This, in turn, will also increase the number of potential use-cases.

At the beginning of each development cycle, new and old use-cases will be prioritized together, and developers will again determine which ones can be attempted for the next release. Developer priorities for subsequent cycles will include looking for use-cases that allow them to extend functionality created for prior cycles. After the first cycle, the time between releases can be shortened so that users can see the system evolving quickly. This increases the chances of user acceptance, since the effects of their requests for change can be seen over a relatively short period of time.

As the functionality of the system progressively increases, new user groups can be included. These might include, among others, the users of some of the systems that will feed information into CICE. These users might benefit from access to a wider range of data and information than is provided by the system they are currently using. There may also be opportunities to simplify or enhance their work, through the use of information layer capabilities on top of the same data that they currently use.

Iterative use-case centered development processes tend to produce software systems that are accepted by end-users, for several reasons. First, the end-users themselves are directly involved in defining requirements. Second, end-users see the system at an early stage and as it evolves. At each release, users have an opportunity to correct the direction that the development team is moving, and to add new requirements. Third, the requirements implemented during each development cycle are the highest priority, based on the input of all stakeholders, including the users themselves. Together, these aspects of iterative development ensure that at any point in time, the system meets the most important user needs.

Dr. Jens Pohl's Ontology and Software Development Process

Source: Word

February 13, 2011

Last Modified: August 1, 2014

MEMORANDUM

Subject: DR. JENS POHL’S ONTOLOGY AND SOFTWARE DEVELOPMENT PROCESS

1. Purpose

To share the process developed and used by Dr. Jens Pohl of California Polytechnic State University, San Luis Obispo, CA, and Tapestry Solutions to (1) develop taxonomies, ontologies, and information technology (IT) systems that support user communities and (2) apply model driven architecture (MDA) to this process.

2. Basic Point

Dr. Pohl’s repeatable process for IT system development (to include development of taxonomies and ontologies) involves users throughout the process and accommodates changes to requirements during taxonomy, ontology, and IT system development.  This process (1) speeds delivery of taxonomies, ontologies, and IT systems that meet users’ needs, (2) implements incremental and iterative development and testing, (3) is an agile requirements process, and (4) is an example of a flexible and tailorable process.

3. Background

a. Dr. Jens Pohl is a lead developer of the Integrated Computerized Deployment System (ICODES), a program of record (POR) application, first fielded in 1997, that employs an ontology and software agents.

b. He and his associates have used the process described in this document for more than a decade, refining it based on experience and use. This process is focused on developing an ontology and includes taxonomies as an integral part of IT system development.

c. The March 2012 DoD report to Congress, "2012 Congressional Report on Defense Business Operations," singled out ICODES as "Positive Example 12." However, that report makes no mention of the process described below, with its early and continuous user involvement. This paper further explains some of the reasons ICODES is a positive example for DoD.

4. Overview

a. Paragraphs 5 through 10 summarize the phases of progressive development and user review of prototypes. These repeated reviews by intended users accommodate the reality that users usually cannot describe what they want or need fully or accurately while requirements are being developed, before software starts to be written. The expression "I will know it when I see it" captures this reality.

b. Paragraph 11 discusses model driven architecture (MDA).

c. Paragraph 12 explains that the FY 2010 National Defense Authorization Act (NDAA) directed DoD to adopt some of the practices that have long been part of Dr. Pohl's repeatable process.

5. Phase 1, Create Ontology

Development starts with an ontology builder (i.e., the individual who will build the ontology) meeting with end users or individuals who have a good understanding of end users' needs, processes, information requirements, etc. Ideally, the ontology builder spends time observing and participating in the activities of the domain that he is to model and support. The ontology builder models the processes, etc., in the Unified Modeling Language (UML), the Web Ontology Language (OWL), or a similar language. This phase usually takes about four months.
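
As a hedged illustration of what a small slice of a Phase 1 ontology might look like if captured in OWL, the sketch below uses the Python rdflib library. The ship-loading classes and properties are assumptions made for this example; they are not the actual ICODES model.

from rdflib import Graph, Namespace, RDF, RDFS, OWL

# Hypothetical namespace for a small ship-loading domain.
EX = Namespace("http://example.org/ship-loading#")

g = Graph()
g.bind("ex", EX)

# Classes (the "things" in the domain).
for cls in (EX.Ship, EX.Deck, EX.CargoItem):
    g.add((cls, RDF.type, OWL.Class))

# Relationships (object properties) that give the data its context.
g.add((EX.hasDeck, RDF.type, OWL.ObjectProperty))
g.add((EX.hasDeck, RDFS.domain, EX.Ship))
g.add((EX.hasDeck, RDFS.range, EX.Deck))

g.add((EX.stowedOn, RDF.type, OWL.ObjectProperty))
g.add((EX.stowedOn, RDFS.domain, EX.CargoItem))
g.add((EX.stowedOn, RDFS.range, EX.Deck))

# A characteristic (datatype property) of cargo items.
g.add((EX.weightTons, RDF.type, OWL.DatatypeProperty))
g.add((EX.weightTons, RDFS.domain, EX.CargoItem))

print(g.serialize(format="turtle"))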

6. Phase 2, Create Application Engine

A software tool processes the ontology produced in Phase 1 to produce an application engine. (Note: production of an application from data modeled in a modeling language is an instance of model driven architecture [MDA].) At this point there are no software agents, no special user interfaces, and no interfaces to external data sources. The application engine is produced from the ontology by the MDA tool in a day.
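
The sketch below illustrates the MDA idea in miniature, not Dr. Pohl's actual tool chain: a small program reads the model (here, the hypothetical OWL classes from the Phase 1 sketch) and generates application code stubs from it, so that a change to the model propagates into regenerated code.

from rdflib import Graph, Namespace, RDF, RDFS, OWL

EX = Namespace("http://example.org/ship-loading#")

def generate_class_stubs(model: Graph) -> str:
    """Emit a Python class stub for every OWL class in the model, with one
    attribute per property whose rdfs:domain is that class."""
    lines = []
    for cls in sorted(model.subjects(RDF.type, OWL.Class)):
        name = cls.split("#")[-1]
        props = sorted(p.split("#")[-1] for p in model.subjects(RDFS.domain, cls))
        lines.append(f"class {name}:")
        if props:
            lines.append("    def __init__(self):")
            lines.extend(f"        self.{p} = None" for p in props)
        else:
            lines.append("    pass")
        lines.append("")
    return "\n".join(lines)

# Tiny stand-in for the Phase 1 model (hypothetical classes and property).
model = Graph()
model.add((EX.Ship, RDF.type, OWL.Class))
model.add((EX.Deck, RDF.type, OWL.Class))
model.add((EX.hasDeck, RDF.type, OWL.ObjectProperty))
model.add((EX.hasDeck, RDFS.domain, EX.Ship))
print(generate_class_stubs(model))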

7. Phase 3, First Prototype

a. A project team is formed. It (1) enhances the ontology, (2) builds software agents, (3) builds user interfaces, and (4) builds interfaces to external data sources. Much of the work is accomplished by entering data into the model from which the software is generated. Every night, the changes and additions made that day are processed by a computer to produce a revised application engine and other software. This ability to update complex software in a few hours of computer processing is one of the significant advantages of MDA.

b. During this phase, a design document is written explaining (1) the software agents, (2) the user interfaces, and (3) the interfaces to external data sources.

8. Phase 4, Second Prototype

After approximately two months, the first version of the application is presented to users for their comments.  This presentation has two parts:

a. In the first part, developers explain what they understand is needed by the users. (This is, in effect, the read-back that warfighters use when planning operations to ensure they understand commanders’ intent.)

b. In the second part, the software is demonstrated. However, the software is so fragile at this point that users do not directly manipulate it. (This is, in effect, a warfighters' sand-table exercise that shows how forces will move through the terrain and interact with each other.)

c. Comments are solicited from the users. These comments can (1) reveal developers' misunderstandings of users' needs, (2) provide suggestions for improving interfaces, (3) identify sources or uses of data not previously identified or explained, and (4) determine necessary changes to documented requirements.

9. Phase 5, Third Prototype

a. Based on the information collected in the first software demonstration, developers modify the (1) ontology, (2) interfaces to external data sources, (3) user interfaces, etc. Because the application engine, user interfaces, etc., are produced from a database, revisions can be completed quickly. It is this use of a database that makes the development process agile.

b. After the software has been modified, users are engaged. This time the users operate the tools. User comments, recommendations, changes to requirements, etc., are collected.

c. Two important results of the user-developer discussions during this meeting are:

(1) Users take ownership of the system (i.e., they see it as their system); and

(2) A bond develops between users and developers.

10. Phases 6 and Beyond, Successive Prototypes

a. Developers keep improving the application based on user feedback obtained at regular meetings in which users use the software and make more decisions.

b. On average, there are six cycles of prototype development and developer-user meetings for user assessments.

11. Model Driven Architecture (MDA)

a. MDA is a software design approach for the development of software systems. It provides a set of guidelines for structuring specifications, which are expressed as models. Model-driven architecture is a type of domain engineering that supports model-driven engineering of software systems. It was launched by the Object Management Group (OMG) in 2001. [1]

b. Dr. Pohl and his associates use MDA tools and the MDA approach because of the complexity of the ontologies and other software products they develop. With MDA, revisions to ontologies, application engines, interfaces, etc., are made by first creating or revising the data in the MDA model and then processing that model to produce the application engines, interfaces, etc.

12. FY 10 National Defense Authorization Act (NDAA), Paragraph 804

Paragraph 804 includes a requirement that DoD develop and implement a new acquisition process for information technology systems that includes (a) "early and continual user involvement" and (b) "multiple, rapidly executed increments or releases of capability."

 

[1] http://en.wikipedia.org/wiki/Model-driven_architecture, accessed February 12, 2011.

Ontology Development Principles

Source: PDF

CDM Technologies Inc., San Luis Obispo, California: Ontology Development
File: ONTOLOGY-Dev-Jan09 Last Update: 11/02/11 (original: 12/30/08) Author: Hisham Assal and Kym Pohl

Introduction

Computers do not have the equivalent of a human cognitive system and therefore store data simply as the numbers and words that are entered into the computer. For a computer to interpret data it requires an information structure that provides at least some level of context. This can be accomplished utilizing an ontology of objects with characteristics and a rich set of relationships to create a virtual version of real world situations and provide the context within which agent logic can automatically operate.

This paper discusses the development of ontologies that serve to provide context for agents to interpret and reason about data changes in decision-support software tools, services and systems. The following brief explanation of key terms and concepts referred to in this paper is provided as an introduction for clarification purposes.

Ontology: The term ontology is loosely used to describe an information structure, rich in relationships, that provides a virtual representation of some real world environment (e.g., the context of a problem situation such as the management of a transport corridor, the loading of a cargo ship, the coordination of a military theater, the design of a building, and so on). The elements of an ontology include objects and their characteristics, different kinds of relationships among objects, and the concept of inheritance. Ontologies are also commonly referred to as object models. However, strictly speaking, the term ontology has a much broader definition: it actually refers to the entire body of knowledge in a particular field. In this sense an ontology would include both an object model and the software agents that are capable of reasoning about information within the context provided by the object model (since the agents utilize business rules that constitute some of the knowledge within a particular domain).

Information and context: Information refers to the combination of data with relationships to provide adequate context for the interpretation of the data. The richer the relationships the greater the context (i.e., meaning conveyed by the combination of data with relationships), and the more opportunity for automatic reasoning by software agents.

Information-centric: Software that incorporates an internal information model, such as an ontology, is often referred to as information-centric software. The information model is a virtual representation of the real world domain under consideration and is designed to provide adequate context for software agents (typically rule-based) to reason about the current state of the virtual environment. Since information-centric software has some understanding of what it is processing it normally contains tools rather than predefined solutions to predetermined problems. These tools are commonly software agents that collaborate with each other and the human user(s) to develop solutions to problems in near real-time, as they occur. Communication between information-centric applications is greatly facilitated since only the changes in information need to be transmitted. This is made possible by the fact that the object, its characteristics and its relationships are already known by the receiving application.

Agents: This term has been applied very loosely in recent years. There are several different kinds of agents. Symbolic reasoning agents are most commonly associated with knowledge management systems. These agents may be described as software modules that are capable of reasoning about events (i.e., changes in data received from external sources or as the result of internal activities) within the context of the information contained in an internal information model (i.e., ontology). The agents collaborate with each other and the human users as they monitor, interpret, analyze, evaluate, and plan alternative courses of action.
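
To make the agent concept concrete, here is a minimal sketch in Python (not CDM's agent framework): a rule-based "agent" is notified of every change to objects in a toy object model and interprets the change within that context. The deck-loading rule and class names are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Deck:
    name: str
    capacity_tons: float
    loaded_tons: float = 0.0

@dataclass
class ObjectModel:
    """A toy ontology instance: objects plus listeners (agents) that are
    told about every change so they can interpret it in context."""
    decks: List[Deck] = field(default_factory=list)
    agents: List[Callable[[Deck], None]] = field(default_factory=list)

    def add_cargo(self, deck: Deck, tons: float) -> None:
        deck.loaded_tons += tons      # the event: a change in the model
        for agent in self.agents:     # agents react to the change
            agent(deck)

def overload_agent(deck: Deck) -> None:
    """Rule-based agent: flag any deck loaded beyond its capacity."""
    if deck.loaded_tons > deck.capacity_tons:
        print(f"ALERT: {deck.name} overloaded "
              f"({deck.loaded_tons:.0f} of {deck.capacity_tons:.0f} tons)")

model = ObjectModel(decks=[Deck("Main deck", capacity_tons=500)])
model.agents.append(overload_agent)
model.add_cargo(model.decks[0], 350)   # no alert
model.add_cargo(model.decks[0], 250)   # agent fires an alert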

Methodology for Developing an Ontology

The process of building an ontology consists of a number of steps to ensure the validity and completeness of the final product. The basic steps include:

Step (1) - Identifying the purpose

The intended use of the ontology must be defined in some formal way to guide the development process. Without a well-defined purpose of the ontology, development can continue with no apparent end-state, and the ontology can grow in different directions beyond the control of system developers. Some common purposes of ontologies include representing knowledge in a given domain of interest, facilitating communication among system components, enabling re-use by other applications, and serving as a common language for multiple systems within the same domain.

A good way to define the purpose of the ontology is by means of use-cases. The intended use of the ontology can be broken down into specific, well-defined use-cases, in which actors and actions are identified, as well as the perceived components that will be involved in each action. Another tool for identifying the purpose of an ontology is a set of questions that the ontology is supposed to answer.

The ontology is complete, in the context of a given set of requirements, when all the use-cases are supported by ontology concepts and all the questions to be asked can be answered by the current data populating the ontology.
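
One way to make such questions testable is to phrase each competency question as a query over sample data, as in the following sketch with the Python rdflib library; the cargo-domain names and the question are hypothetical.

from rdflib import Graph, Namespace, RDF, Literal

EX = Namespace("http://example.org/ship-loading#")

g = Graph()
g.bind("ex", EX)

# Sample data populating the ontology (hypothetical instances).
g.add((EX.tank1, RDF.type, EX.CargoItem))
g.add((EX.tank1, EX.stowedOn, EX.mainDeck))
g.add((EX.tank1, EX.weightTons, Literal(62)))

# Competency question: "What is stowed on the main deck, and how much does it weigh?"
COMPETENCY_QUESTION = """
PREFIX ex: <http://example.org/ship-loading#>
SELECT ?item ?weight WHERE {
    ?item ex:stowedOn ex:mainDeck ;
          ex:weightTons ?weight .
}
"""

# If the query returns the expected answers, the ontology (plus its data)
# can answer this competency question.
for item, weight in g.query(COMPETENCY_QUESTION):
    print(item, weight)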

Step (2) - Building the ontology

Once the domain knowledge has been captured in free form, it is time to start building the ontology. This is the step where the captured knowledge is formalized and concepts are given specific descriptive names to allow communication with other stakeholders. The process of building the ontology can be described in the following steps:

1. Capture the knowledge in the domain of interest. Many knowledge acquisition techniques and sources can be applied in this step, including textbooks, interviews with subject matter experts, databases of case studies, analysis reports, and so on. One of the primary methods of capturing knowledge in this domain is utilizing a subject matter expert to formalize the concepts and produce the model. Other sources of knowledge assist the expert in this task, such as books, military manuals, past plan analyses, and training material.

2. Identify the key concepts and relationships in the domain of interest. The key concepts are the ones that relate to the identified purpose of the ontology. They typically answer critical questions, contribute to the communication among system components, or are involved in the actions of use-cases. Other concepts that help to relate key concepts to each other or add details to key concepts are considered supporting concepts. For example, the key concepts in a human factors ontology are likely to be: Person, Organization, Communication, Personal Traits, and Behavioral Traits.

3. Produce precise textual definitions of such concepts and relationships. The textual definitions help disambiguate the concepts and define their role in the ontology. Existing textual definitions in standard lexicons can help in this step. For example, the WordNet database offers a good electronic resource for common definitions of English language terms. The use of a lexicon like WordNet also facilitates the search for terms and their synonyms, for the purpose of analyzing free text. Other specialized lexicons, such as military manuals, can also be a good source for accepted definitions for common terms.

4. Identify terms to refer to such concepts and relationships. The selection of good ontology terms helps developers understand the role of each concept and possibly the common uses of it. Also system developers do not need to go back to the formal definition of each term every time they need to use it. The selected terms should be expressive of the concept and close to its natural language description.

5. Obtain agreement on all of the above. It is important for all stakeholders to agree on the selection of concepts and the terms used to refer to them in the ontology. Ontology-based systems are typically a collaborative effort, often among multiple organizations. To facilitate communication among all participants there has to be agreement on the ontology.

6. Select a capture method (e.g., Protégé, UML, etc.). Modeling of ontologies is a step toward formalizing the captured knowledge and producing an artifact that can communicate that knowledge to other stakeholders. Most modeling methods have a graphical notation to easily connect concepts and navigate through the ontology. The criteria for selection of a modeling method are:

  • Coverage: Does the modeling method provide enough elements to represent all of the captured concepts and the types of relationships that exist among them?
  • Granularity: How much detail can the modeler represent in a concept?
  • Learning curve: Is this modeling method a standard method, which modelers are already familiar with? Or is it a new method that requires investment of time and effort to learn to use efficiently?

Protégé is the modeler of choice for OWL-based ontologies. There are other tools that support OWL development, such as Concept Maps, but the support that is offered by Protégé is stronger in visualization and ontology navigation.

Step (3) - Coding the ontology

The implementation of ontology-based systems requires translating the ontology model into an implementation language. The language chosen for coding an ontology (e.g., formal logic, UML, OWL, etc.) has to provide the following characteristics:

  • Conceptual distance: The ability of the language to represent abstract concepts at multiple levels of abstraction
  • Expressive power: The ability to represent complex concepts with consistent language constructs.
  • Standards compliant: The language should follow accepted standards and notations to allow for better communication among development team members.
  • Translatability: The language constructs have formal structures that can be converted to forms in other languages, without ambiguity.
  • Guidelines: The model development process is supported by a set of guidelines and best practices.
  • Formal semantics: The intended meaning of each language construct is unambiguous and well-defined.
  • Flexibility: The ability to represent concepts in different ways, using different constructs.
  • User base: The availability of user groups provides support for ontology development, through the exchange of experiences and best practices.
  • Availability: The language has to be available, preferably, in the public domain, along with tools to support its use.

For example, the selected coding language may be OWL to facilitate communication with other system developers, especially in the case of a multi-organization effort. OWL satisfies many of the selection criteria mentioned above.

  • Conceptual distance: OWL allows the representation of abstract concepts, maintaining its level of abstraction and allowing for details as needed.
  • Expressive power: OWL employs description logic in a dynamic environment utilizing the open world assumption. Description logic is a powerful mechanism for stating concepts.
  • Standards: OWL is based on RDF, a widely adopted standard with many tools for processing its formats.
  • Translatability: As a formal language with well-defined semantics, OWL can be translated into other implementation languages, especially RDF-based languages. The degree to which the translation preserves all of the ontology features depends on the target language and its supported features.
  • Formal semantics: OWL has well-defined semantics for language constructs. The semantics capability is supported by reasoner specifications that describe what a valid structure should be.
  • Flexibility: OWL offers a wide range of constructs to model concepts and relationships. In most cases, there are multiple constructs available to model any given concept. The selection of a particular construct is usually determined by the use-cases for the concept and the relationships to other concepts.
  • User base: OWL enjoys a strong user base, especially among users of the Protégé modeler. There are also many conferences, user group meetings, and on-line forums supporting the development of ontologies in OWL.
  • Availability: The OWL specification is published in the public domain and tools for modeling in OWL are available for free (e.g., Protégé and CMAPtools).

The next step is to translate the model into actual system implementation. Two aspects need to be addressed in this step, namely verification tools and code generation. Checking tools are needed to make sure that the ontology structure is consistent and remains consistent during system operation, after changes have been made. Code generation tools assist in taking a formal ontology consistently and repeatedly from a formal language to an implementation language. System implementation typically goes through multiple iterations that may require re-writing the basic model, or large sections of it. Utilizing code generation tools makes this task easier.
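
As a small example of the kind of checking tool described above (a structural sketch, not a full OWL reasoner), the code below verifies that every object property in an rdflib graph points at declared classes for its domain and range; the sample graph contains a deliberate error.

from rdflib import Graph, Namespace, RDF, RDFS, OWL

def check_domains_and_ranges(g: Graph) -> list:
    """Return findings where a property's rdfs:domain or rdfs:range refers
    to something that is not declared as an owl:Class."""
    declared = set(g.subjects(RDF.type, OWL.Class))
    findings = []
    for prop in set(g.subjects(RDF.type, OWL.ObjectProperty)):
        for predicate in (RDFS.domain, RDFS.range):
            for target in g.objects(prop, predicate):
                if target not in declared:
                    findings.append(
                        f"{prop} has undeclared {predicate.split('#')[-1]}: {target}")
    return findings

# Hypothetical graph with a deliberate error: ex:Deck is never declared.
EX = Namespace("http://example.org/ship-loading#")
g = Graph()
g.add((EX.Ship, RDF.type, OWL.Class))
g.add((EX.hasDeck, RDF.type, OWL.ObjectProperty))
g.add((EX.hasDeck, RDFS.domain, EX.Ship))
g.add((EX.hasDeck, RDFS.range, EX.Deck))
print(check_domains_and_ranges(g))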

Step (4) - Integrating with existing ontologies

It is often the case that an ontology is being developed as an extension of an existing ontology or to connect with an existing ontology. In such cases, integration with the existing ontology must be carefully considered:

  • The existence of other ontologies that are relevant to this ontology.
  • All assumptions have to be made explicit.
  • Agreement has to be achieved regarding concepts and relationships.

Step (5) - Evaluation

The ontology must be examined from a technical perspective, along with the associated software environment, and the documentation with respect to a frame of reference, which includes:

  • Requirements specifications.
  • Competency questions.
  • The real world.

The selection of the frame of reference and the evaluation criteria have to align with the purpose and requirements of the ontology. The semantic correctness of an ontology is crucial for the proper functioning of applications. In order to evaluate an ontology it is useful to employ a methodology that has two main components: structural analysis and domain knowledge analysis.

Structural Analysis: This involves the analysis of the structure of concepts in terms of hierarchy (taxonomy) and in terms of the relationships among concepts. The main criteria for this analysis are:

  • Uniqueness of concepts (no redundancy): Every relevant concept in the domain should be represented in a clear and concise manner within the model. Concepts that are similar or have some common properties with other concepts should be represented in relationship to the existing concepts, either in a class hierarchy or through other types of relationships such as “part-of”. The ease with which existing concepts co-exist and new ones can be added is a key indicator of the model’s elegance and sophistication.
  • No circular reference should exist at any level: Circular references can occur when a parent class in a class hierarchy inherits from a child class at any level down the hierarchy. This circular reference may not be obvious if the child class is more than two levels down from the parent class. Circular references are problematic because they confuse the semantics of the two concepts (e.g., "… a jet plane is a kind-of aircraft" and "… an aircraft is a kind-of jet plane"). A minimal detection sketch appears after this list.
  • Levels of Abstraction: Class hierarchies can have any number of levels, where every level introduces more details to the classes at that level. The choice to add many attributes to a class in one level of the hierarchy or to create many levels with few attributes at each level has implications on the semantics of the model and on the operational aspects of applications that use this model.
  • Complexity (number of concepts + number of relationships for each concept): The complexity of an ontology plays an important role in its usability. Applications typically traverse a collection of related concepts to form a context for reasoning or decision making. The more complex the ontology, the more involved it becomes for the application (and for the application developer) to form the proper context.
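
The sketch below illustrates the circular-reference criterion above: a small check that reports any class able to reach itself by following kind-of (subclass) links. The class names and the single-parent representation are simplifying assumptions.

def find_cycles(parent_of: dict) -> list:
    """Return classes that can reach themselves by following kind-of
    (subclass) links, i.e., circular references in the hierarchy."""
    cycles = []
    for start in parent_of:
        seen, current = set(), start
        while current in parent_of and current not in seen:
            seen.add(current)
            current = parent_of[current]
            if current == start:
                cycles.append(start)
                break
    return cycles

# Illustrative hierarchy with a deliberate cycle between two concepts.
hierarchy = {
    "JetPlane": "Aircraft",
    "Aircraft": "JetPlane",   # circular: the two concepts' semantics are confused
    "CargoShip": "Ship",
}
print(find_cycles(hierarchy))   # ['JetPlane', 'Aircraft']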

Domain Knowledge Analysis: Focuses on the purpose of the ontology. Use-cases for the application identify its information needs and form the basis for assessing the ontology’s completeness. The criteria for this analysis are:

  • Coverage of use-cases (completeness): Application use-cases define the different ways the ontology will be used. All concepts that are referenced in the target use-cases must exist in the ontology in some form (either directly or inferred). Other concepts not explicitly mentioned in any use-case may exist in the ontology serving as extended specifications for further reasoning or increased scope.
  • Partitioning: The arrangement of classes in a hierarchy, where features of subclasses do not overlap, forms a disjoint decomposition of classes. When subclasses represent all the possible classifications of a super class, then this is called exhaustive decomposition. In this case, any instance of the super class is also an instance of one of the subclasses. These two properties of ontology partitioning (i.e., disjoint decomposition and exhaustive decomposition) place integrity constraints on the ontology and provide for tighter semantics, as well as a more powerfully expressive ontology that in turn leads to more straightforward reasoning capabilities.
  • Extensibility: When the incorporation of additional concepts is required, perhaps due to the need to support additional use-cases, it should be possible to add these concepts without the need to re-structure the entire ontology. If engineered correctly, the incorporation of extended or entirely new concepts can be achieved in a fairly isolated manner without unduly impacting unrelated areas of the model or actual model users.
  • Documentation: The intended meaning and the usage of each concept must be clearly documented, so that reasoning facilities can effectively and appropriately employ them.

Step (6) - Documentation

The development of software components that are based on an ontology relies on good documentation of the ontology and the availability of the documentation to all developers. The documentation must include:

  • Purpose and intended use of the ontology
  • Assumptions made at every level about concepts and their relationships.
  • Primitives used to express the definitions (i.e., meta-ontology).
  • Relationship to existing ontologies.

Using a lexicon such as WordNet standardizes the definitions across multiple developer teams and across organizations, and reduces the chances for ambiguity in dealing with concepts that may have multiple word-senses. The choice of WordNet also offers the opportunity for other ontologies to integrate with the ontology under consideration, by examining the standard definitions of its concepts and deciding on concept compatibility.
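
A small sketch of consulting WordNet for a candidate term, using the NLTK interface (an assumption; the paper does not prescribe a particular toolkit). Listing the senses lets stakeholders agree on, and document, which word-sense a concept name is meant to carry.

# Requires the NLTK package and a one-time nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def list_senses(term: str) -> None:
    """Print each WordNet sense of the term with its definition and synonyms."""
    for synset in wn.synsets(term):
        print(synset.name(), "-", synset.definition())
        print("   synonyms:", ", ".join(synset.lemma_names()))

list_senses("vessel")   # e.g., watercraft vs. container vs. anatomical senses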


ICODES and Ontology Development

Source: Word

August 15, 2014

DRAFT

MEMORANDUM

Subject:  ICODES and Ontology Development

1. Purpose

To record information that addresses issues raised by Mr. David Blevins of Booz Allen Hamilton (BAH) in a WebEx presented by Mr. Boone Pendergrast and Mr. Matt Parrott of Tapestry Solutions on August 7, 2014.

2. Basic Points

a. The Integrated Computerized Deployment System (ICODES) and its ontology were (a) developed for a very reasonable cost and (b) developed within a reasonable period, and (c) the ontology is being used effectively. These are very different results from those Mr. Blevins described in his presentation at the July 28, 2014, meeting of the Federal Big Data Working Group. Mr. Blevins described efforts based on HL-7 and other health-care data models and ontologies.

b. ICODES developers and others have documented the processes and considerations used in ICODES development.

c. The ICODES ontology was (1) developed with a focus on the reality of ICODES' intended users and (2) tested as it was developed.

3. Background

a. The WebEx followed up on a Federal Big Data Working Group meeting held on July 28, 2014.

b. During the WebEx, Mr. Blevins asked several questions about how ICODES and its ontology were developed and the cost thereof. These questions were prompted in part by (a) his observing the application of ontology in the medical community and (b) a comment by Ms. Kay Goodier at the meeting of the Federal Big Data Working Group on July 28, 2014, that an office within DoD has directed that contractors' estimates of the cost to develop an ontology be tripled when preparing government cost estimates. This tripling of the contractor's estimated cost is based on experience.

4. Discussion

a. ICODES and its ontologies have avoided the problems that Mr. Blevins has seen with ontologies and their use in information technology (IT) intended to support health care. This is probably largely the result of the methods used to develop ICODES and its ontology.

b. The most important of these methods are:

(1) Start development with developers immersing themselves in the reality of the intended users.

(2) Follow a repeatable process in which users are shown, and test, the developing product that will use the ontology. This enables evaluation of the ontology, the tool, and the uses to which they will be put.

(3) Document and exploit lessons learned, regardless of who or what organization learned and documented the lesson.

c. ICODES' development started in 1992 with prototypes. ICODES was fielded to users in 1997. Initially, ICODES was a tool for planning the loading of ships. Since 1997, ICODES has been improved and extended until ICODES (a) is the DoD program of record for planning the loading of ships, aircraft, trains, and trucks; (b) has a version that is hosted in a DoD cloud; and (c) has a version that can be used disconnected from the cloud (e.g., when a ship at sea lacks the bandwidth to use the cloud hosting ICODES).

d. Because ICODES and its ontology have been fielded and are in operational use, the processes explained in the documentation listed below have not only been developed for a single effort but also been refined as improved versions of ICODES have been developed through the years.

5. Documents on Concepts and Methods

The following short documents explain concepts and methods that ICODES developers have used with very good results.

a. "Dr. Jens Pohl's Ontology and Software Development Process" (file name - Enterprise Info-011 Pohl ICODES Development Process - short - Taxonomy Pilot.doc). This four-page document explains the process that has been developed and refined for developing an ontology and software to exploit the ontology.

b. "Use-Case Centered Development Process" (file name - Use-Case Centered Development Process - Pohl.doc). This three-page document explains the importance and role of use cases.

c. "Ontology Development Principles" (file name - Ontology Development Principles.pdf). This seven-page document explains concepts (e.g., ontology and information-centric) and a six-step process.

DoD Standardization of Military and Associated Terminology

Source: PDF

Department of Defense

INSTRUCTION

NUMBER 5025.12

August 14, 2009

DA&M

SUBJECT: Standardization of Military and Associated Terminology

References:

(a) DoD Directive 5025.12, “Standardization of Military and Associated Terminology,” June 30, 2004 (hereby canceled)

(b) DoD Directive 5105.53, “Director of Administration and Management,” February 26, 2008

(c) Joint Publication 1-02, “Department of Defense Dictionary of Military and Associated Terms,” as amended

(d) Chairman of the Joint Chiefs of Staff Instruction 5705.01C, “Standardization of Military and Associated Terminology,” February 19, 2008

1. Purpose

This Instruction:

a. Reissues Reference (a) as a DoD Instruction in accordance with the authority in Reference (b).

b. Establishes the overarching policy, procedures and requirements for identifying, deleting, modifying, and incorporating definitions into Reference (c).

c. Continues to authorize the development, publication, and maintenance of Reference (c) in accordance with this Instruction and Reference (d).

2. Applicability

This Instruction applies to OSD, the Military Departments, the Office of the Chairman of the Joint Chiefs of Staff and the Joint Staff, the Combatant Commands, the Office of the Inspector General of the Department of Defense, the Defense Agencies, the DoD Field Activities, and all other organizational entities within the Department of Defense (hereafter referred to collectively as the “DoD Components”).

3. Policy

It is DoD policy:

a. To improve communications and mutual understanding within the Department of Defense, with other Federal Agencies, and between the United States and its international partners through the standardization of military and associated terminology.

b. That the DoD Components use Reference (c) as the primary terminology source when preparing correspondence, to include policy, strategy, doctrine, and planning documents.

c. That the DoD Components use the terminology and approval criteria in Enclosure 2 when considering terms for inclusion in Reference (c). Additional information on the criteria for including terminology in Reference (c) can be found in the preface of Reference (c) and in Reference (d).

d. That this Instruction does not restrict the use and publication of terms and definitions for unique functional areas or unilateral use by individual DoD Components. Any military or associated terms or definitions that involve DoD-wide applicability or usage across functional boundaries, may be nominated for inclusion in Reference (c) if appropriate.

4. Responsibilities.

See Enclosure 1

5. Releasability Unlimited

This Instruction is approved for public release. Copies may be obtained through the Internet from the DoD Issuances Web Site at http://www.dtic.mil/whs/directives.

6. Effective Date

This Instruction is effective immediately.

Michael L. Rhodes, Acting Director, Administration and Management

Enclosures

1. Responsibilities

2. DoD Terminology and Approval Criteria

Enclosure 1

RESPONSIBILITIES

1. DIRECTOR OF ADMINISTRATION AND MANAGEMENT (DA&M).

The DA&M shall establish policy on the compilation and publication of standardized military terminology in accordance with Reference (b).

2. DIRECTOR, WASHINGTON HEADQUARTERS SERVICES (WHS). The Director, WHS, under the authority, direction, and control of the DA&M, shall:

a. Serve as the OSD and WHS terminology point of contact; staff proposed additions, deletions, and changes to Reference (c) within the OSD Components and WHS.

b. Forward recommended terminology changes to Reference (c) to the Chairman of the Joint Chiefs of Staff.

c. Represent the OSD Components and WHS in terminology working groups convened in accordance with Reference (d).

3. CHAIRMAN OF THE JOINT CHIEFS OF STAFF. The Chairman of the Joint Chiefs of Staff shall:

a. Manage the DoD Terminology Program.

b. Develop, publish, and maintain Reference (c) in accordance with this Instruction.

c. Resolve DoD terminology issues. Disapproved OSD-nominated terms to Reference (c) shall be referred to the DA&M.

4. HEADS OF THE OSD AND DOD COMPONENTS. The Heads of the OSD and DoD Components shall ensure that any term and its definition having DoD-wide applicability and usage be submitted to their Component terminology point of contact for processing and inclusion in Reference (c) in accordance with this Instruction and Reference (d).

Enclosure 2

DOD TERMINOLOGY AND APPROVAL CRITERIA

The following criteria shall be used by the DoD Components when considering terms for inclusion in Reference (c).

1. DOD TERMINOLOGY CRITERIA. For a term to be considered for inclusion in Reference (c), it must meet the following criteria:

a. Inadequate coverage in a standard, commonly accepted dictionary.

b. Terminology is of general military or associated significance. Technical or highly specialized terms may be included if they can be defined in easily understood language and if their inclusion is of general military or associated significance.

c. Term is not a code word, brevity word, or NATO-only term.

d. Term is not Component-, Service-, or functionality-specific unless it is commonly employed by U.S. joint forces as a whole.

2. APPROVAL CRITERIA. Terminology shall be approved for inclusion in Reference (c) when it is:

a. Directed by the Secretary or Deputy Secretary of Defense, or the Chairman of the Joint Chiefs of Staff.

b. Coordinated by the sponsoring DoD Component with OSD, the Office of the Chairman of the Joint Chiefs of Staff, and the Military Departments at a minimum, and approved:

(1) In joint doctrine publications for inclusion in Reference (c);

(2) In DoD or CJCS issuances for inclusion in Reference (c); or

(3) NATO agreed terminology.

c. Nominated for inclusion in Reference (c) by the Heads of the OSD or DoD Components, coordinated with OSD, the Office of the Chairman of the Joint Chiefs of Staff, and the Military Departments at a minimum, and approved according to the provisions of this Instruction and Reference (d).

CJCS Standardization of Military and Associated Terminology

CHAIRMAN OF THE JOINT CHIEFS OF STAFF INSTRUCTION

Source: PDF

J-7 CJCSI 5705.01D
DISTRIBUTION: A, B, C, JS-LAN, S
10 November 2010

STANDARDIZATION OF MILITARY AND ASSOCIATED TERMINOLOGY

References:
a. DODI 5025.12, 14 August 2009, “Standardization of Military and Associated Terminology”
b. JP 1-02, “Department of Defense Dictionary of Military and Associated Terms”
c. CJCSI 5711.01 series, “Policy on Action Processing”
d. CJCSI 5120.02 series, “Joint Doctrine Development System”
e. CJCSM 5120.01 series, “Joint Doctrine Development Process”
f. Allied Administrative Publication-6, “NATO Glossary of Terms and Definitions (English and French)”

1. Purpose. To establish policy for the standardization of Department of Defense (DOD) terminology.

2. Cancellation. CJCSI 5705.01C, 19 February 2008, “Standardization of Military and Associated Terminology,” is canceled.

3. Applicability. This instruction applies to the Office of the Secretary of Defense (OSD) in accordance with reference a; the Military Services; the Joint Staff, including activities and DOD agencies reporting through the Chairman of the Joint Chiefs of Staff; the combatant commands; and other DOD components.

4. Policy. In accordance with reference a, DOD policy on terminology is to improve communications and mutual understanding within the Department of Defense, with other federal agencies, and between the United States and its international partners through standardization of military and associated terminology.

5. Definitions. See reference b.

Joint Doctrine Development System

Source: PDF

J-7 CJCSI 5120.02C
DISTRIBUTION: A, B, C
13 January 2012

JOINT DOCTRINE DEVELOPMENT SYSTEM

References: See Enclosure C.

1. Purpose. This instruction sets forth policy to assist the Chairman of the Joint Chiefs of Staff in implementing the responsibility to “develop and establish doctrine for all aspects of the joint employment of the Armed Forces” as directed in references a and b.

2. Cancellation. CJCSI 5120.02B, “Joint Doctrine Development System,” 4 December 2009, is canceled.

3. Applicability. The policy herein applies to the Joint Staff, Services, Combatant Commands, combat support agencies, and any organization involved in the development of joint doctrine.

4. Policy. This instruction establishes the role of joint doctrine and explains the responsibilities of the Joint Staff, combatant commands, Services, and combat support agencies for joint doctrine development.

5. Definitions. See Glossary.

6. Responsibilities. The Director, Joint Force Development, Joint Staff (J-7), is responsible for managing the joint doctrine development system outlined in this instruction.

7. Summary of Changes. This update reflects new roles and responsibilities resulting from the disestablishment of U.S. Joint Forces Command in 2011 and provides information on the role of the National Guard Bureau. In addition, information and procedures for the joint doctrine development process, to include joint publication staffing, revising, and formatting, have

Joint Doctrine Development Process

Source: PDF

J-7 CJCSM 5120.01
DISTRIBUTION: A, B, C
13 January 2012

JOINT DOCTRINE DEVELOPMENT PROCESS

References: See Enclosure H.

1. Purpose. This manual sets forth procedures for the development of joint doctrine in support of the Chairman of the Joint Chiefs of Staff, implementing the responsibility to "develop and establish doctrine for the joint employment of the Armed Forces" as directed in references a and b and as established in reference c.

2. Cancellation. None.

3. Applicability. The procedures herein apply to the Joint Staff, Services, Combatant Commands, combat support agencies, and any organization involved in the development of joint doctrine.

4. Procedures. Detailed procedures for the development and staffing of joint doctrine are provided in the enclosures.

5. Summary. The information contained in Enclosures B, E, F, and G was previously published in reference c. This manual separates the joint doctrine development process from the policy in reference c, which establishes the role of joint doctrine and explains the responsibilities of the Joint Staff, Combatant Commands, Services, and combat support agencies for joint doctrine development. Enclosures C and D provide information on the key doctrine element (KDE) framework and use of the Joint Doctrine Development Tool (JDDT), respectively.

6. Releasability. This manual is approved for public release; distribution is unlimited. DOD components (to include the combatant commands), other Federal agencies, and the public may obtain copies of this manual through the

Marine Corps Planning Process

Source: PDF

MCWP 5-1
U.S. Marine Corps
PCN 143 000068 00

GFM DI Implementation: Unique Identification (UID) for GFM Volume 1

Source: PDF

SUBJECT: Global Force Management Data Initiative (GFM DI) Implementation: Unique Identification (UID) for GFM

References: See Enclosure 1

1. PURPOSE

a. Manual. Pursuant to DoD Instruction 8260.03 (Reference (a)), the authority in DoD Directive (DoDD) 5124.02 (Reference (b)), and in accordance with DoDD 8320.03 (Reference (c)), this Manual implements policy, assigns responsibilities, and provides procedures and rules for the electronic documentation of force structure data across the Department of Defense.

b. Volume. Volume 1 of this Manual sets forth responsibilities and procedures for the UID of force structure data in software application programs known as GFM organization servers (OSs) and includes:

(1) The generation of force management identifiers (FMIDs) for internal use by OSs.

(2) The integration into force management systems external to the OSs of that subset of FMIDs titled organization unique identifiers (OUIDs).

(3) Acquiring seed values for use as FMID prefixes from the Enterprise-wide Identifier (EwID) Seed Server (ESS), the chosen technical implementation for FMIDs.

2. APPLICABILITY. This Volume applies to OSD, the Military Departments, the Office of the Chairman of the Joint Chiefs of Staff and the Joint Staff, the Combatant Commands, the Office of the Inspector General of the Department of Defense, the Defense Agencies, the DoD Field Activities, and all other organizational entities within the Department of Defense (hereafter referred to collectively as the “DoD Components”).

3. DEFINITIONS. See Glossary.
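The UID scheme in paragraph 1.b above pairs a centrally issued seed value (used as an FMID prefix) with values generated locally by each organization server, so identifiers can be minted without central coordination and without collisions. The Python sketch below illustrates only that general prefix-plus-local-suffix pattern; the class names, field names, and identifier format (SeedBlock, FMIDGenerator, the "prefix-counter" string) are illustrative assumptions, not the actual GFM DI or EwID Seed Server interfaces.

```python
from dataclasses import dataclass


@dataclass
class SeedBlock:
    """Hypothetical seed value issued by a central seed server.

    In the GFM DI scheme, seed values acquired from the EwID Seed Server (ESS)
    serve as FMID prefixes; the exact format is defined by the ESS, not here.
    """
    prefix: int        # centrally issued seed value (assumed integer form)
    description: str   # e.g., which organization server holds this seed


class FMIDGenerator:
    """Illustrative generator: seed prefix plus a locally incremented suffix.

    Because the prefix is unique to one organization server, identifiers minted
    here cannot collide with those minted by a server holding a different seed.
    """

    def __init__(self, seed: SeedBlock):
        self.seed = seed
        self.counter = 0  # local suffix, incremented per identifier

    def next_fmid(self) -> str:
        self.counter += 1
        # Hypothetical textual form: <seed prefix>-<local suffix>
        return f"{self.seed.prefix}-{self.counter}"


if __name__ == "__main__":
    # Example: a server holding seed value 4711 mints two identifiers.
    gen = FMIDGenerator(SeedBlock(prefix=4711, description="example organization server"))
    print(gen.next_fmid())  # 4711-1
    print(gen.next_fmid())  # 4711-2
```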

GFM DI Implementation: The Organizational and Force Structure Construct Volume 2

Source: PDF

SUBJECT: Global Force Management Data Initiative (GFM DI) Implementation: The Organizational and Force Structure Construct (OFSC)

References: See Enclosure 1

1. PURPOSE

a. Manual. Pursuant to DoD Instruction (DoDI) 8260.03 (Reference (a)), the authority in DoD Directive (DoDD) 5124.02 (Reference (b)), and in accordance with DoDD 8320.03 (Reference (c)), this Manual implements policy, assigns responsibilities, and provides procedures and rules for the electronic documentation of force structure data across the DoD.

b. Volume. This Volume sets forth responsibilities and procedures for implementation of the OFSC for authorized force structure in GFM DI Organization Servers (OSs) and for task organized force structure in systems that consume OS data.

2. APPLICABILITY. This Volume applies to OSD, the Military Departments, the Office of the Chairman of the Joint Chiefs of Staff and the Joint Staff, the Combatant Commands, the Office of the Inspector General of the DoD, the Defense Agencies, the DoD Field Activities, and all other organizational entities within the DoD (hereafter referred to collectively as the “DoD Components”).

3. DEFINITIONS. See Glossary.

4. POLICY. In accordance with Reference (a), this Volume implements DoD policy to:

a. Electronically document and maintain currency of authorized force structure in a suite of authoritative data sources (ADSs), known as GFM DI OSs, hereafter referenced to as OSs, in a comprehensive and hierarchical format usable by systems across the DoD as a common reference for data integration, and to ensure that force structure data is visible, accessible, understandable, and trusted across the DoD, as required by DoDD 8320.02 (Reference (d)).
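Paragraph 4.a above calls for documenting authorized force structure in a comprehensive and hierarchical format that consuming systems can use as a common reference. As a minimal sketch of what such a hierarchy looks like to a consuming system, the Python fragment below models organizations as a tree of nodes keyed by unique identifiers; the OrgNode type, its fields, and the sample data are invented for illustration and are not the OFSC schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class OrgNode:
    """Hypothetical node in a hierarchical force structure tree.

    Each organization carries a unique identifier (e.g., an OUID drawn from an
    organization server) and a list of subordinate organizations.
    """
    ouid: str                                  # unique identifier (assumed string form)
    name: str                                  # display name of the organization
    subordinates: List["OrgNode"] = field(default_factory=list)


def walk(node: OrgNode, depth: int = 0) -> None:
    """Print the hierarchy; a consuming system would instead join on the OUIDs."""
    print("  " * depth + f"{node.name} [{node.ouid}]")
    for child in node.subordinates:
        walk(child, depth + 1)


if __name__ == "__main__":
    # Illustrative tree only; names and identifiers are invented for the example.
    corps = OrgNode("4711-1", "Example Corps", [
        OrgNode("4711-2", "Example Division", [
            OrgNode("4711-3", "Example Brigade"),
        ]),
    ])
    walk(corps)
```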

Organizational and Force Structure Construct (OFSC) for Global Force Management (GFM)

Source: PDF

SUBJECT: Organizational and Force Structure Construct (OFSC) for Global Force Management (GFM)

References:
(a) Strategic Planning Guidance (SPG) FY 2006-2011, March 1, 2004
(b) Deputy Secretary Memorandum, “Actions from the Senior Readiness Oversight Council of December 10, 2003,” January 20, 2004
(c) DoD Directive 7730.65, “Department of Defense Readiness Reporting System (DRRS),” June 3, 2002
(d) DoD Instruction 7730.64, “Automated Extracts of Manpower and Unit Organizational Element Files,” December 11, 2004
(e) through (k), see Enclosure 1

1. PURPOSE

This Instruction:

1.1. Establishes policy and assigns responsibility under Reference (a) for developing standardized force structure data that will provide on-demand information in a net-centric environment. Force structure data will be available electronically in a joint hierarchical way for integration and use throughout the Department of Defense.

1.2. Establishes policy and directs implementation of Force Management Identifiers (FMIDs), following the direction of the Senior Readiness Oversight Council according to Reference (b), to uniquely identify and tag force structure data at all organizational levels of the Department of Defense. This policy will provide the foundation for net-centric data management throughout the Department of Defense. Such identifiers will facilitate GFM, readiness reporting according to Reference (c), and manpower management according to Reference (d).

Slides from GFM DI Briefing

Source: PPT

Title Slide

Core Planning Memo-102 GFM DI Slide1.PNG

Problem - Solution

This slide is from a briefing prepared on the GFM DI and the next actions to be taken to implement it.  The core challenge appears to be implementing the GFM DI in the face of resistance by those who fear they have something to lose.

The July 8, 2009, Vice Chief of Staff of the Army memorandum “Building an Enduring Assessment Support Capability” makes the same point:  “The Army currently maintains disparate sets of information used to gauge performance, conduct assessments, and inform decision makers’ decisions.  In many cases the Army’s disparate sets of information are functionally focused, lack transparency, access can be unnecessarily limited, and in some cases, quality control is inadequate.”  The desired end state is “Reliable, cohesive, transparent, and accessible data.”

In a September 29, 2009, email, General Hondo Campbell, USA, Commander of US Army Forces Command, said that the Army needs to “be able to holistically ‘see itself’ – a capability we do not have today.  Instead we ‘see’ the Army through respective lens of individual ‘silos.’”

Core Planning Memo-102 GFM DI Slide2.PNG

 

Data Engineering for D-Day

Cartoon from the "New Yorker"

CorePlanningMemoCartoon.png

Core Memo-002 Gavin Quotation

Word

December 26, 2007

MEMORANDUM

Subject:  LIEUTENANT GENERAL JAMES M. GAVIN, USA, OBSERVATIONS ON THE CONTRIBUTIONS OF STANDARD TERMINOLOGY AND TACTICS, TECHNIQUES, AND PROCEDURES, TO D-DAY’S SUCCESS

1. Purpose.  To use information from Lieutenant General James M. Gavin’s On to Berlin:  Battles of an Airborne Commander 1943-1946, to highlight the importance of written documents that standardize terminology and procedures.

2. Basic Point.  Standardizing and documenting tactics, techniques, and procedures (TTP) and supporting terminology are usually overlooked.  This is partly because doing so requires collaboration among people with different perspectives, which takes much time and effort and provides little if any immediate gratification.  The lack of such standardizing and documenting, however, limits the potential for creating combined-arms forces from individual units.

3. General Gavin’s Experience.  General Gavin commanded the 82nd Airborne Division in World War II.  The quotation below from his book, On to Berlin:  Battles of an Airborne Commander 1943-1946, explains how he established common terminology and TTP as a foundation for D-Day.  Note the effort that he devoted to collaboration and the difficulty of getting members of the various US and multinational partners’ forces to agree.  General Gavin notes that gaining concurrence requires much work, but success brings an ability to move to an entirely new level. [1]

I began to realize that one of our most critical needs was to standardize the operating practices of our forces.  Those of us who had fought in the Mediterranean theater had developed combat practices that we soon took for granted.  The new formations in the United Kingdom obviously had a great deal to learn and were anxious to get started.  Such simple things as plane loading, warning and jump signals, flight formations, and even simple terminology had to be agreed upon.  For example, the British preferred to fly their transports in what they called “bomber stream” formations, which were no formations at all, simply individual planes flying in trail in a random manner.  We preferred to fly in troop-carrier group formations of thirty-six to forty-five airplanes that flew in a V or V’s with three aircraft in each V.  We had always referred to the area where we land as the “jump area.”  The British referred to it as “the drop zone,” or simply “the DZ.”  After conferring with the staff of the 101st Airborne Division, all of whom were good friends of mine, I went to work on a document to standardize the American airborne practices and I was able to publish the first memorandum on the subject, “Training Memorandum on the Employment of Airborne Forces,” late in 1943.

This was followed by the drafting of a document that would standardize the operating procedures for all the forces involved, British and American, R.A.F. and Army Air Forces and both Navies.  In the drafting of it, I was greatly aided by R.A.F. Wing Commander Dugald McPherson.  It turned out to be a very tedious task that involved frequent visits to all the higher headquarters.  Everyone wanted to discuss, alter, criticize, and contribute to it, and it was terribly important that they all have an opportunity to do so, for when it was completed and everyone was properly trained, we would have achieved a state of complete flexibility in the employment of our forces.  The British could fly in American transports; we could fly in British gliders, and so forth.

 

[1] The quoted passage starts on page 88 of On to Berlin:  Battles of an Airborne Commander 1943-1946 by James M. Gavin, published by The Viking Press, New York, in 1978.
