CISSP Exam Cram: Security Architecture and Models
Date: Feb 4, 2013
Introduction
The security architecture and models domain deals with hardware, software, security controls, and documentation. When hardware is designed, it must be built to specific standards that provide mechanisms to protect the confidentiality, integrity, and availability of data. The operating systems (OSs) that run on the hardware must also be designed to ensure security. Building secure hardware and operating systems is just a start, however. Vendors and customers also need a way to verify that hardware and software perform as stated, so that both parties can rate these systems and gain some level of assurance that they will function in a known manner. This is the purpose of evaluation criteria: they give the parties involved a common yardstick for assurance.
This chapter introduces the trusted computer base and the ways in which systems can be evaluated to assess their level of security. To pass the CISSP exam, you need to understand system hardware and software models and how models of security can be used to secure systems. Standards such as the Common Criteria, the Information Technology Security Evaluation Criteria (ITSEC), and the Trusted Computer System Evaluation Criteria (TCSEC) are covered on the exam.
Computer System Architecture
At the core of every computer system is the central processing unit (CPU) and the hardware that makes it run. The CPU is just one of the items that you can find on the motherboard. The motherboard serves as the base for most crucial system components. These physical components interact with the OS and applications to do the things we need done. Let’s start at the heart of the system and work our way out.
Central Processing Unit
The CPU is the heart of the computer system. The CPU consists of the following:
- An arithmetic logic unit (ALU) that performs arithmetic and logical operations
- A control unit that extracts instructions from memory and decodes and executes the requested instructions
- Memory, used to hold instructions and data to be processed
The CPU is capable of executing a series of basic operations, including fetch, decode, execute, and write. Pipelining increases throughput by overlapping these steps, so the CPU can fetch one instruction while decoding and executing others. The CPU can function in one of four states:
- Ready state—Program is ready to resume processing
- Supervisor state—Program can access entire system
- Problem state—Only nonprivileged instructions executed
- Wait state—Program waiting for an event to complete
Because CPUs have very specific designs, the operating system must be developed to work with the CPU. CPUs also have different types of registers to hold data and instructions. The base register contains the beginning address assigned to a process, whereas the limit register marks the end of the process’s memory segment. Together, these components are responsible for the recall and execution of programs. CPUs have made great strides, as Table 5.1 documents. As the size of transistors has decreased, the number of transistors that can be placed on a CPU has increased. By increasing the total number of transistors and ramping up clock speed, the power of CPUs has increased exponentially. As an example, a 3.06GHz Intel Core i7 can perform tens of thousands of millions of instructions per second (MIPS).
Table 5.1. CPU Advancements
| CPU | Date | Transistors | Clock Speed |
| --- | --- | --- | --- |
| 8080 | 1974 | 6,000 | 2MHz |
| 80386 | 1986 | 275,000 | 12.5MHz |
| Pentium | 1993 | 3,100,000 | 60MHz |
| Intel Core 2 | 2006 | 291,000,000 | 2.66GHz |
| Intel Core i7 | 2009 | 731,000,000 | 3.06GHz |
Two basic designs of CPUs are manufactured for modern computer systems:
- Reduced Instruction Set Computing (RISC)—Uses simple instructions that require a reduced number of clock cycles.
- Complex Instruction Set Computing (CISC)—Performs multiple operations for a single instruction.
The CPU requires two inputs to accomplish its duties: instructions and data. The data is passed to the CPU for manipulation where it is typically worked on in either the problem or the supervisor state. In the problem state, the CPU works on the data with nonprivileged instructions. In the supervisor state, the CPU executes privileged instructions.
The CPU can be classified in one of several categories depending on its functionality. When the computer’s CPU, motherboard, and operating system all support the functionality, the computer system is also categorized according to the following:
- Multiprogramming—Can interleave two or more programs for execution at any one time.
- Multitasking—Can alternate between two or more tasks or subtasks at a time.
- Multiprocessor—Supports two or more CPUs. Windows 98 does not support multiprocessors, whereas Windows Server 2008 does.
A multiprocessor system can work in symmetric or asymmetric mode. Symmetric mode shares resources equally among all programs. Asymmetric mode can set a priority so that one application has priority and gains control of one of the processors. The data that CPUs work with is usually part of an application or program. These programs are tracked by a process ID (PID). Anyone who has ever looked at Task Manager in Windows or executed a ps command on a Linux machine has probably seen a PID number. Fortunately, most programs do much more than the first C code you wrote that probably just said, “Hello World.” Each independent path of execution within a program is known as a thread.
A program that has the capability to carry out more than one thread at a time is known as multi-threaded. You can see an example of this in Figure 5.1.
Figure 5.1. Processes and threads.
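As a minimal sketch of this idea, the following Python fragment runs two threads inside a single process. The thread names and the worker function are purely illustrative; the point is that each thread is an independent path of execution sharing the same process memory.

```python
import threading

results = []
lock = threading.Lock()

def worker(name, n):
    # Each thread computes its own partial result independently.
    total = sum(range(n))
    with lock:  # serialize access to the shared list
        results.append((name, total))

# Two threads within one process; both share the process's address space.
threads = [threading.Thread(target=worker, args=(f"t{i}", 10)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # prints [('t0', 45), ('t1', 45)]
```

Both threads appear under the same PID in Task Manager or `ps`, which is exactly the process/thread relationship shown in Figure 5.1.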
The operating system uses process isolation to separate processes. These techniques are needed to ensure that each application receives adequate processor time to operate properly. The four process isolation techniques used are
- Encapsulation of objects—Other processes do not interact with the application.
- Virtual mapping—The application is written in such a way that it believes it is the only application running.
- Time multiplexing—This allows the application or process to share resources.
- Naming distinctions—Processes are assigned their own unique name.
An interrupt is another key piece of a computer system. An interrupt is an electrical connection between a device and the CPU. The device can put an electrical signal on this line to get the attention of the CPU. The following are common interrupt methods:
- Programmed I/O—Used to transfer data between a CPU and peripheral device.
- Interrupt-driven I/O—A more efficient input/output method, but one that requires complex hardware.
- I/O using DMA—I/O based on direct memory access can bypass the processor and write the information directly into main memory.
- Memory mapped I/O—Requires the CPU to reserve space for I/O functions and make use of the address for both memory and I/O devices.
- Port mapped I/O—Uses a special class of instruction that can read and write a single byte to an I/O device.
There is a natural hierarchy to memory and, as such, there must be a way to manage memory and ensure that it does not become corrupted. That is the job of the memory manager. Memory management systems on multitasking operating systems are responsible for
- Relocation—Maintains the ability to swap memory contents from memory to secondary storage as needed.
- Protection—Provides control to memory segments and restricts what process can write to memory.
- Sharing—Allows sharing of information based on a user’s level of access; that is, Mike can read the object, whereas Shawn can read and write to the object.
- Logical organization—Provides for the sharing and support for dynamic link libraries.
- Physical organization—Provides for the physical organization of memory.
Let’s now look at storage media.
Storage Media
A computer is not just a CPU; memory is also an important component. The CPU uses memory to store instructions and data. Therefore, memory is an important type of storage media. The CPU is the only device that can directly access memory. Systems are designed that way because the CPU has a high level of system trust. The CPU can use different types of addressing schemes to communicate with memory, which include absolute addressing and relative addressing. Memory can be addressed either physically or logically. Physical addressing refers to the hard-coded address assigned to the memory. Applications and programmers writing code use logical addresses. Relative addresses use a known address with an offset applied. Not only can memory be addressed in different ways but there are also different types of memory. Memory can be either nonvolatile or volatile. The sections that follow provide examples of both.
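Relative addressing and the base/limit registers mentioned earlier can be sketched in a few lines of Python. The base and limit values here are hypothetical, chosen only to show the translation and the protection check.

```python
# Hypothetical register values for one process's memory segment.
BASE = 0x4000   # base register: start of the segment
LIMIT = 0x1000  # limit register: size of the segment

def translate(offset):
    """Convert a relative (logical) address to a physical address,
    using the limit register to enforce the segment boundary."""
    if not 0 <= offset < LIMIT:
        raise MemoryError("address outside the process's segment")
    return BASE + offset

print(hex(translate(0x0FF0)))  # prints 0x4ff0: a valid relative address
```

An offset of `0x1000` or more would raise an error, which is the essence of how the limit register protects one process’s memory from another.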
RAM
Random access memory (RAM) is volatile memory. If power is lost, the data is destroyed. Types of RAM include static RAM, which uses circuit latches to represent binary data, and dynamic RAM, which must be refreshed every few milliseconds.
Static random access memory (SRAM) doesn’t require a refresh signal as DRAM does. The chips are more complex and are thus more expensive. However, they are faster. DRAM access times come in at 60 nanoseconds (ns) or more; SRAM has access times as fast as 10ns. SRAM is often used for cache memory.
RAM can be configured as Dynamic Random Access Memory (DRAM). Dynamic RAM chips are cheap to manufacture. Dynamic refers to the memory chips’ need for a constant update signal (also called a refresh signal) to keep the information that is written there. Currently, there are four popular implementations of DRAM:
- Synchronous DRAM (SDRAM)—Shares a common clock signal with the transmitter of the data. The computer’s system bus clock provides the common signal that all SDRAM components use for each step to be performed.
- Double Data Rate (DDR)—Transfers data on both the rising and falling edges of the clock signal, achieving twice the transfer rate of ordinary SDRAM.
- DDR2—Splits each clock pulse in two, doubling the number of operations it can perform.
- Rambus Direct RAM (RDRAM)—A proprietary synchronous DRAM technology. RDRAM can be found in fewer new systems today than just a few years ago. Rambus is found mainly in gaming consoles and home theater components.
ROM
Read-only memory (ROM) is nonvolatile memory that retains information even if power is removed. ROM is typically used to load and store firmware. Firmware is embedded software much like BIOS.
Some common types of ROM include
- Erasable Programmable Read-Only Memory (EPROM)
- Electrically Erasable Programmable Read-Only Memory (EEPROM)
- Flash Memory
- Programmable Logic Devices (PLD)
Secondary Storage
Although memory plays an important part in the world of storage, other long-term types of storage are also needed. One of these is sequential storage. Anyone who has owned an IBM PC with a tape drive knows what sequential storage is. Tape drives are a type of sequential storage that must be read sequentially from beginning to end.

Another well-known type of secondary storage is direct-access storage. Direct-access storage devices do not have to be read sequentially; the system can identify the location of the information and go directly to it to read the data. A hard drive is an example of a direct-access storage device: a hard drive has a series of platters, read/write heads, motors, and drive electronics contained within a case designed to prevent contamination. Hard drives are used to hold data and software, whether the operating system or applications you’ve installed on a computer system.

Floppies, or diskettes, are also considered secondary storage. The data on a diskette is organized in tracks and sectors. Tracks are narrow concentric circles on the disk; sectors are pie-shaped slices of the disk. The disk itself is made of a thin plastic material coated with iron oxide, much like the material found in a backup tape or cassette tape. As the disk spins, the drive heads move in and out to locate the correct track and sector and then read or write the requested data.
Compact disks (CDs) are a type of optical media. They use a laser/opto-electronic sensor combination to read or write data. A CD can be read only, write once, or rewriteable. CDs can hold up to around 800MB on a single disk. A CD is manufactured by applying a thin layer of aluminum to what is primarily hard clear plastic. During manufacturing or whenever a CD-R is burned, small bumps or pits are placed in the surface of the disk. The pattern of pits and flat areas is read back as binary ones and zeros. Unlike the tracks and sectors of a floppy, a CD comprises one long spiral track that begins at the inside of the disk and continues toward the outer edge.
Digital video disks (DVDs) are very similar to a CD because both are optical media—DVDs just hold more data. The next generation of optical storage is the Blu-ray disk. These optical disks can hold 50GB or more of data.
I/O Bus Standards
The data that the CPU is working with must have a way to move from the storage media to the CPU. This is accomplished by means of a bus. The bus is nothing more than lines of conductors that transmit data between the CPU, storage media, and other hardware devices. From the point of view of the CPU, the various adaptors plugged into the computer are external devices. These connectors and the bus architecture used to move data to the devices has changed over time. Some common bus architectures are listed here:
- ISA—The Industry Standard Architecture (ISA) bus started as an 8-bit bus designed for IBM PCs. It is now obsolete.
- PCI—The Peripheral Component Interconnect (PCI) bus was developed by Intel and served as a replacement for ISA and other bus standards. PCI Express is now the current standard.
- SCSI—The Small Computer System Interface (SCSI) bus allows a variety of devices to be daisy-chained off of a single controller. Many servers use the SCSI bus for their preferred hard drive solution.
Two serial bus standards, USB and FireWire, have also gained wide market share. USB overcame the limitations of traditional serial interfaces. USB 2.0 devices can communicate at speeds up to 480Mbps, whereas USB 3.0 devices have a proposed rate of 4.8Gbps. Up to 127 devices can be chained together. USB is used for flash memory, cameras, printers, external hard drives, and even iPods. Two of the fundamental advantages of USB are that it has such broad product support and that many devices are immediately recognized when connected. The competing standard for USB is FireWire, or IEEE 1394. This design can be found on many Apple computers, but is also found on digital audio and video equipment.
Hardware Cryptographic Components
Hardware offers the ability to build in encryption. A relatively new hardware security device for computers is called the trusted platform module (TPM) chip. The TPM moves cryptographic processes down to the hardware level and provides a greater level of security than software encryption. A TPM chip can be installed on the motherboard of a client computer and is used for hardware authentication. The TPM authenticates the computer in question rather than the user. TPM uses the boot sequence to determine the trusted status of a platform. TPM is now covered by ISO/IEC 11889-1:2009.
The TPM provides the ability for encryption by calculating a hashed value based on items such as the system’s firmware, configuration details, and core components of the operating system’s kernel. At the time of installation, this hash value is securely stored within the TPM chip. This provides attestation, the ability to confirm, authenticate, or prove that a configuration is genuine. The TPM is a tamper-resistant cryptographic module that can securely report the system configuration to a policy enforcer and thereby provide attestation.
Virtual Memory and Virtual Machines
Modern computer systems have developed other ways in which to store and access information. One of these is virtual memory. Virtual memory is the combination of the computer’s primary memory (RAM) and secondary storage (the hard drive). By combining these two technologies, the OS can make the CPU believe that it has much more memory than it actually does. Examples of virtual memory include
- Page file
- Swap space
- Swap partition
These virtual memory types are user-defined in terms of size, location, and so on. When RAM is depleted, the OS begins saving data onto the computer’s hard drive. Paging moves individual pages of a program between RAM and the page file, whereas swapping moves an entire program out of memory. Either way, data is moved back and forth between the hard drive and RAM as needed, and a specific partition can even be configured to hold such data as a swap partition. Individuals who have used a computer’s hibernation function or ever opened more programs than their computers had memory to support are probably familiar with the operation of virtual memory.
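A toy demand-paging simulation makes the mechanism concrete. This is only a sketch: RAM is modeled as a handful of page frames with first-in, first-out replacement, a plain Python set stands in for the page file, and the frame count and page numbers are arbitrary.

```python
from collections import deque

FRAMES = 3
ram = deque()       # resident pages, oldest first (FIFO replacement)
page_file = set()   # pages "swapped out" to the simulated page file
faults = 0

def access(page):
    global faults
    if page in ram:
        return                   # page hit: already resident in RAM
    faults += 1                  # page fault: page must be brought in
    page_file.discard(page)
    if len(ram) == FRAMES:
        evicted = ram.popleft()  # evict the oldest resident page...
        page_file.add(evicted)   # ...out to the page file
    ram.append(page)

for p in [1, 2, 3, 1, 4, 5]:
    access(p)

print(sorted(ram), sorted(page_file), faults)  # prints [3, 4, 5] [1, 2] 5
```

The second reference to page 1 is a hit, but once a fourth distinct page arrives, older pages start moving to the page file, which is the shuffling you see when a machine runs low on RAM.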
Closely related to virtual memory are virtual machines, such as VMware, VirtualBox, and VirtualPC. VMware and VirtualPC are two leading contenders in this category. A virtual machine enables the user to run a second OS within a virtual host. For example, a virtual machine will let you run another Windows OS, Linux x86, or any other OS that runs on an x86 processor and supports standard BIOS booting. Virtual systems make use of a hypervisor to present virtualized hardware resources to the guest operating system. A Type 1 hypervisor runs directly on the hardware, with VM resources provided by the hypervisor, whereas a Type 2 hypervisor runs on a host operating system above the hardware. Virtual machines are a huge trend and can be used for development, system administration, production, and reducing the number of physical devices needed. The hypervisor is also being used to design virtual switches, routers, and firewalls.
Computer Configurations
The following is a list of some of the most commonly used computer and device configurations:
- Print server—Print servers are usually located close to printers and allow many users to access the printer and share its resources.
- File server—File servers allow users to have a centralized site to store files. This provides an easy way to perform backups because it can be done on one server and not all the client computers. It also allows for group collaboration and multiuser access.
- Program server—Program servers are also known as application servers. This service allows users to run applications not installed on the end users’ system. It is a very popular concept in thin client environments. Thin clients depend on a central server for processing power. Licensing is another important consideration.
- Web server—Web servers provide web services to internal and external users via web pages. A sample web address or URL (uniform resource locator) is http://www.thesolutionfirm.com.
- Database server—Database servers store and access data. This includes information such as product inventory, price lists, customer lists, and employee data. Because databases hold sensitive information, they require well-designed security controls.
- Laptops and tablets—Mobile devices that are easily lost or stolen. Mobile devices have become much more powerful and must be properly secured.
- Smart phones—Gone are the cell phones of the past that simply placed calls and sent SMS texts. Today’s smart phones are more like many computers and have a large amount of processing capability; they can take photos and have onboard storage, Internet connectivity, and the ability to run applications. These devices are of particular concern as more companies start to support bring your own device (BYOD). Such devices can easily fall outside of company policy and controls.
Security Architecture
Although a robust architecture is a good start, real security requires that you have a security architecture in place to control processes and applications. The concepts related to security architecture include the following:
- Protection rings
- Trusted computer base (TCB)
- Open and closed systems
- Security modes
- Recovery procedures
Protection Rings
The operating system knows who and what to trust by relying on rings of protection. Rings of protection work much like your network of family, friends, coworkers, and acquaintances. The people who are closest to you, such as your spouse and family, have the highest level of trust. Those who are distant acquaintances or are unknown to you probably have a lower level of trust. It’s much like the guy you see in New York City on Canal Street trying to sell new Rolex watches for $100; you should have little trust in him and his relationship with the Rolex company!
In reality, the protection rings are conceptual. Figure 5.2 shows an illustration of the protection ring schema. The first implementation of such a system was in MIT’s Multics time-shared operating system.
Figure 5.2. Rings of protection.
The protection ring model provides the operating system with various levels at which to execute code or to restrict that code’s access. The rings provide much greater granularity than a system that just operates in user and privileged mode. As code moves toward the outer bounds of the model, the layer number increases and the level of trust decreases.
- Layer 0—The most trusted level. The operating system kernel resides at this level. Any process running at layer 0 is said to be operating in privileged mode.
- Layer 1—Contains nonprivileged portions of the operating system.
- Layer 2—Where I/O drivers, low-level operations, and utilities reside.
- Layer 3—Where applications and processes operate. This is the level at which individuals usually interact with the operating system. Applications operating here are said to be working in user mode.
Not all systems use all rings. Most systems that are used today operate in two modes: user mode or supervisor (privileged) mode. Items that need high security, such as the operating system security kernel, are located at the center ring. This ring is unique because it has access rights to all domains in that system. Protection rings are part of the trusted computing base concept.
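The trust relationship between rings can be sketched as a simple numeric comparison, which is essentially what a CPU’s privilege check does. The component names below are illustrative labels for the four layers just described, not real OS identifiers.

```python
# Ring assignments following the four-layer model described above.
RING_OF = {"kernel": 0, "os_services": 1, "drivers": 2, "apps": 3}

def may_invoke(component, required_ring):
    """A component may perform an operation only if it runs in a ring
    at least as trusted (numerically <=) as the operation requires."""
    return RING_OF[component] <= required_ring

print(may_invoke("kernel", 0))  # prints True: layer 0 is fully privileged
print(may_invoke("apps", 0))    # prints False: user mode cannot run ring-0 code
```

Lower numbers mean greater trust, so a request flowing outward is always allowed, while a request flowing inward must be mediated, which is exactly why ring 0 holds the security kernel.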
Trusted Computer Base
The trusted computer base (TCB) is the sum of all the protection mechanisms within a computer, including hardware, software, controls, and processes, and is responsible for enforcing the security policy and maintaining confidentiality and integrity. The TCB is the only portion of a system that operates at a high level of trust. It monitors four basic functions:
- Input/output operations—I/O operations are a security concern because operations from the outermost rings might need to interface with rings of greater protection. These cross-domain communications must be monitored.
- Execution domain switching—Applications running in one domain or level of protection often invoke applications or services in other domains. If these requests are to obtain more sensitive data or service, their activity must be controlled.
- Memory protection—To truly be secure, the TCB must monitor memory references to verify confidentiality and integrity in storage.
- Process activation—Registers, process status information, and file access lists are vulnerable to loss of confidentiality in a multiprogramming environment. This type of potentially sensitive information must be protected.
The TCB monitors the functions in the preceding list to ensure that the system operates correctly and adheres to security policy. The TCB follows the reference monitor concept. The reference monitor is an abstract machine that is used to implement security. The reference monitor’s job is to validate access to objects by authorized subjects. The reference monitor operates at the boundary between the trusted and untrusted realm. The reference monitor has three properties:
- Cannot be bypassed and controls all access
- Cannot be altered and is protected from modification or change
- Can be verified and tested to be correct
The reference monitor is much like the bouncer at a club because it stands between each subject and object. Its role is to verify the subject meets the minimum requirements for access to an object, as illustrated in Figure 5.3.
Figure 5.3. Reference monitor.
The reference monitor can be designed to use tokens, capability lists, or labels.
- Tokens—Communicate security attributes before requesting access.
- Capability lists—Offer faster lookup than security tokens but are not as flexible.
- Security labels—Used by high-security systems because labels offer permanence, something neither tokens nor capability lists provide.
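A reference monitor built on a capability list can be sketched in a few lines. This is a toy model, not any real OS mechanism: the subjects, object, and rights reuse the Mike/Shawn sharing example from earlier in the chapter, and every request is forced through a single `check()` function, mirroring the "cannot be bypassed" property.

```python
# Capability lists: for each subject, the objects and rights it holds.
capabilities = {
    "mike":  {"report.txt": {"read"}},
    "shawn": {"report.txt": {"read", "write"}},
}

def check(subject, obj, right):
    """Validate access to an object by a subject; deny by default."""
    return right in capabilities.get(subject, {}).get(obj, set())

print(check("mike", "report.txt", "write"))   # prints False: not in Mike's list
print(check("shawn", "report.txt", "write"))  # prints True
```

An unknown subject or an unlisted right falls through to an empty set and is denied, which reflects the reference monitor’s default stance at the boundary between the trusted and untrusted realm.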
At the heart of the system is the security kernel, which handles all user/application requests for access to system resources. A small security kernel is easy to verify, test, and validate as secure. In practice, however, security kernels tend to accumulate extra code, because processes located inside the kernel run faster and have privileged access. To avoid the performance cost of keeping such code outside the kernel, Linux and Windows have fairly large security kernels, sacrificing ease of verification in return for performance gains. Figure 5.4 illustrates an example of the design of the Windows OS.
Figure 5.4. Security kernel.
Although the reference monitor is conceptual, the security kernel can be found at the heart of every system. The security kernel is responsible for running the required controls used to enforce functionality and resist known attacks. As mentioned previously, the reference monitor operates at the security perimeter—the boundary between the trusted and untrusted realm. Components outside the security perimeter are not trusted. All control and enforcement mechanisms are inside the security perimeter.
Open and Closed Systems
Open systems accept input from other vendors and are based on standards and practices that allow connection to different devices and interfaces. The goal is to promote full interoperability whereby the system can be fully utilized.
Closed systems are proprietary. They use devices not based on open standards and are generally locked. They lack standard interfaces to allow connection to other devices and interfaces.
An example of this can be seen in the United States cell phone industry. AT&T and T-Mobile cell phones are based on the worldwide Global System for Mobile Communications (GSM) standard and can be used overseas easily on other networks by simply changing the subscriber identity module (SIM). These are open-system phones. Phones that are used on the Sprint network use Code Division Multiple Access (CDMA), which does not have worldwide support.
Security Modes of Operation
Several security modes of operation are based on Department of Defense (DoD 5220.22-M) classification levels as defined at http://www.dtic.mil/whs/directives/corres/html/522022m.htm. Per the DoD, a system operates in one of four modes, determined by the sensitivity of the information it processes and the clearance levels of its authorized users (see Table 5.2):
- Dedicated—Every user has a need-to-know for all information stored or processed, has formal access approval, and has executed all appropriate nondisclosure agreements for that information. This level must also support enforced system access procedures. All hardcopy output and removed media are handled at the level for which the system is accredited until reviewed by a knowledgeable individual. All users can access all data.
- System High—Users have a need-to-know for some of the information contained within the system, but every user requires access approval and signed nondisclosure agreements for all the information stored and/or processed. Access permission to an object by users not already possessing access permission must be assigned only by authorized users of the object. This mode must provide an audit trail capability that records time, date, user ID, terminal ID (if applicable), and file name. All users can access some data based on their need to know.
- Compartmented—Valid need-to-know for some of the information on the system. Every user has formal access approval for all information they will access on the system and require proper clearance for the highest level of data classification on the system. All users have signed NDAs for all information they will access on the system. All users can access some data based on their need to know and formal access approval.
- Multilevel—Every user has a valid need-to-know for that information for which he/she is to have access. They have formal access approval and have signed nondisclosure agreements for that information to which he or she is to have access. Mandatory access controls shall provide a means of restricting access to files based on the sensitivity label. All users can access some data based on their need to know, clearance, and formal access approval.
Table 5.2. Security Modes of Operation
| Mode | Dedicated | System High | Compartmented | Multilevel |
| --- | --- | --- | --- | --- |
| Signed NDA | All | All | All | All |
| Clearance | All | All | All | Some |
| Approval | All | All | Some | Some |
| Need to Know | All | Some | Some | Some |
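The pattern in Table 5.2 is easy to memorize once you see it as data: moving from dedicated to multilevel relaxes one requirement at a time. The sketch below encodes the table and reports which requirements each mode relaxes from "all users" to "some users"; the dictionary keys are just convenient labels, not official DoD terms.

```python
# Table 5.2 expressed as data: which requirements hold for ALL users.
MODES = {
    "dedicated":     {"nda": "all", "clearance": "all", "approval": "all", "need_to_know": "all"},
    "system_high":   {"nda": "all", "clearance": "all", "approval": "all", "need_to_know": "some"},
    "compartmented": {"nda": "all", "clearance": "all", "approval": "some", "need_to_know": "some"},
    "multilevel":    {"nda": "all", "clearance": "some", "approval": "some", "need_to_know": "some"},
}

def relaxed(mode):
    """List the requirements a mode relaxes from 'all' to 'some'."""
    return [r for r in ("clearance", "approval", "need_to_know")
            if MODES[mode][r] == "some"]

print(relaxed("dedicated"))   # prints []
print(relaxed("multilevel"))  # prints ['clearance', 'approval', 'need_to_know']
```

Note that the signed NDA row is "all" in every mode, which is why it never appears in the relaxed list: nondisclosure agreements are required of every user regardless of mode.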
Operating States
When systems are used to process and store sensitive information, there must be some agreed-on methods for how this will work. Generally, these concepts were developed to meet the requirements of handling sensitive government information with categories such as sensitive, secret, and top secret. The burden of handling this task can be placed on either administration or the system itself.
Single-state systems are designed and implemented to handle one category of information. The burden of management falls on the administrator who must develop the policy and procedures to manage this system. The administrator must also determine who has access and what type of access the users have. These systems are dedicated to one mode of operation, so they are sometimes referred to as dedicated systems.
Multistate systems depend not on the administrator, but on the system itself. They are capable of having more than one person log in to the system and access various types of data depending upon the level of clearance. As you would probably expect, these systems are not inexpensive. The XTS-400 that runs the Secure Trusted Operating Program (STOP) OS from BAE Systems is an example of a multilevel state computer system. Multistate systems can operate as a compartmentalized system. This means that Mike can log in to the system with a secret clearance and access secret-level data, whereas Carl can log in with top-secret level access and access a different level of data. These systems are compartmentalized and can segment data on a need-to-know basis.
Recovery Procedures
Unfortunately, things don’t always operate normally; they sometimes go wrong and a system failure can occur. A system failure could potentially compromise the system. Efficient designs have built-in recovery procedures to recover from potential problems:
- Fail safe—If a failure is detected, the system is protected from compromise by termination of services.
- Fail soft—A detected failure terminates the noncritical process and the system continues to function.
It is important to be able to recover when an issue arises. This requires taking a proactive approach and backing up all critical files on a regular schedule. The goal of recovery is to recover to a known state. Common issues that require recovery include
- System Reboot—A controlled shutdown and restart, typically performed in response to a detected failure.
- System Restart—Occurs automatically when the system goes down in an uncontrolled manner, forcing an immediate reboot.
- System Cold Start—Results from a major failure or component replacement, when normal recovery procedures cannot bring the system to a consistent state.
- System Compromise—Caused by an attack or breach of security.
Process Isolation
Process isolation is required to maintain a high level of system trust. For a system to be certified as a multilevel security system, it must support process isolation. Without process isolation, there would be no way to prevent one process from spilling over into another process’s memory space, corrupting data, or possibly making the whole system unstable. Process isolation is performed by the operating system; its job is to enforce memory boundaries.
For a system to be secure, the operating system must prevent unauthorized users from accessing areas of the system to which they should not have access. Sometimes this is done by means of a virtual machine. A virtual machine allows users to believe that they have the use of the entire system, but in reality, processes are completely isolated. To take this concept a step further, some systems that require truly robust security also implement hardware isolation. This means that the processes are segmented not only logically but also physically.
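The idea of the operating system enforcing memory boundaries can be sketched conceptually. This toy model is purely illustrative (a real OS enforces isolation in hardware via the MMU); the `ToyOS` class and its methods are inventions for this example:

```python
# Conceptual sketch (not a real OS): the "operating system" tracks which
# process owns each memory address and refuses cross-process access.

class MemoryViolation(Exception):
    pass

class ToyOS:
    def __init__(self):
        self.memory = {}   # address -> value
        self.owner = {}    # address -> owning process id

    def allocate(self, pid, address, value):
        self.memory[address] = value
        self.owner[address] = pid

    def read(self, pid, address):
        # Process isolation: only the owning process may access the page.
        if self.owner.get(address) != pid:
            raise MemoryViolation(f"pid {pid} cannot access {address:#x}")
        return self.memory[address]

os_ = ToyOS()
os_.allocate(pid=1, address=0x1000, value="process 1 secret")
print(os_.read(1, 0x1000))    # the owner reads its own page

try:
    os_.read(2, 0x1000)       # another process is blocked by the OS
except MemoryViolation as e:
    print("blocked:", e)
```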
Security Models
Security models of control are used to determine how security will be implemented, what subjects can access the system, and what objects they will have access to. Simply stated, they are a way to formalize security policy. Security models of control are typically implemented by enforcing integrity, confidentiality, or other controls. Keep in mind that each of these models lays out broad guidelines and is not specific in nature. It is up to the developer to decide how these models will be used and integrated into specific designs, as shown in Figure 5.5.
Figure 5.5. How security models are used in the design of an OS.
The sections that follow discuss the different security models of control in greater detail. The first three models discussed are considered lower-level models.
State Machine Model
The state machine model is based on a finite state machine, as shown in Figure 5.6. State machines are used to model complex systems and deal with acceptors, recognizers, state variables, and transition functions. A state machine defines the behavior of a finite number of states, the transitions between those states, and the actions that can occur.
Figure 5.6. Finite state model.
The most common representation of a state machine is through a state machine table. For example, as Table 5.3 illustrates, if the state machine is at the current state of (B) and condition (2), the next state would be (C).
Table 5.3. State Machine Table
State Transition | State A | State B | State C
Condition 1      | ...     | ...     | ...
Condition 2      | ...     | (C)     | ...
Condition 3      | ...     | ...     | ...
A state machine model monitors the status of the system to prevent it from slipping into an insecure state. Systems that support the state machine model must have all their possible states examined to verify that all processes are controlled. The state machine concept serves as the basis of many security models. The model is valued for knowing in what state the system will reside. As an example, if the system boots up in a secure state, and every transaction that occurs is secure, it must always be in a secure state and not fail open.
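A state machine table like Table 5.3 maps directly onto a lookup structure. In this sketch, only the (B, condition 2) → C transition comes from the text; every other entry is an illustrative placeholder:

```python
# Sketch of a state machine table: transitions[state][condition] -> next state.
# Only the (B, 2) -> C transition is taken from Table 5.3; the rest are
# made-up placeholders.

transitions = {
    "A": {1: "A", 2: "B", 3: "C"},
    "B": {1: "A", 2: "C", 3: "B"},   # current state B + condition 2 -> C
    "C": {1: "B", 2: "C", 3: "A"},
}

def next_state(current, condition):
    return transitions[current][condition]

print(next_state("B", 2))  # C
```

Enumerating every state and transition up front is exactly what lets such a system be examined to verify that it can never slip into an insecure state.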
Information Flow Model
The Information Flow model is an extension of the state machine concept and serves as the basis of design for both the Biba and Bell-LaPadula models, which are discussed in the sections that follow. The Information Flow model consists of objects, state transitions, and lattice (flow policy) states. The real goal of the information flow model is to prevent unauthorized, insecure information flow in any direction. This model and others can make use of guards. Guards allow the exchange of data between various systems.
Noninterference Model
The Noninterference model as defined by Goguen and Meseguer was designed to make sure that objects and subjects of different levels don’t interfere with the objects and subjects of other levels. The model uses inputs and outputs of either low or high sensitivity. Each data access attempt is independent of all others and data cannot cross security boundaries.
Confidentiality
Although the preceding models serve as a basis for many security models that were developed later, one major concern is confidentiality. Government entities such as the DoD are concerned about the confidentiality of information. The DoD divides information into categories to ease the burden of managing who has access to what levels of information. DoD information classifications are sensitive but unclassified (SBU), confidential, secret, and top secret. One of the first models to address the needs of the DoD was the Bell-LaPadula model.
Bell-LaPadula
The Bell-LaPadula state machine model enforces confidentiality. The Bell-LaPadula model uses mandatory access control to enforce the DoD multilevel security policy. For a subject to access information, he must have a clear need to know and a clearance that meets or exceeds the information’s classification level.
The Bell-LaPadula model is defined by the following properties:
- Simple security property (ss property)—This property states that a subject at one level of confidentiality is not allowed to read information at a higher level of confidentiality. This is sometimes referred to as “no read up.”
- Star * security property—This property states that a subject at one level of confidentiality is not allowed to write information to a lower level of confidentiality. This is also known as “no write down.”
- Strong star * property—This property states that a subject can read and write an object only at its own level of sensitivity; it cannot read or write objects at higher or lower levels.
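The simple security and star properties reduce to two comparison checks. This is a minimal sketch; the classification levels and function names are illustrative, not part of any real reference monitor:

```python
# Sketch of the Bell-LaPadula read/write rules. Higher number = more sensitive.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level, object_level):
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # Star (*) property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "top secret"))     # False: no read up
print(can_write("secret", "confidential"))  # False: no write down
print(can_read("secret", "confidential"))   # True
```

Notice that the two rules point in opposite directions: together they keep sensitive data from flowing downward.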
Although the Bell-LaPadula model did go a long way in defining the operation of secure systems, the model is not perfect. It did not address security issues such as covert channels. It was designed in an era when mainframes were the dominant platform. It was designed for multilevel security and takes only confidentiality into account.
Integrity
Integrity is a good thing. It is one of the basic elements of the security triad along with confidentiality and availability. Integrity plays an important role in security because it can verify that unauthorized users are not modifying data, authorized users don’t make unauthorized changes, and that databases balance and data remains internally and externally consistent. Although governmental entities are typically very concerned with confidentiality, other organizations might be more focused on the integrity of information. In general, integrity has four goals:
- Prevent data modification by unauthorized parties
- Prevent unauthorized data modification by authorized parties
- Maintain consistency between data and the real world
- Maintain internal and external consistency
Two security models that address integrity are Biba and Clark-Wilson. Both of these models are addressed next.
Biba
The Biba model was the first model developed to address the concerns of integrity. Originally published in 1977, this lattice-based model has the following defining properties:
- Simple integrity property—This property states that a subject at one level of integrity is not permitted to read an object of lower integrity. This is sometimes referred to as “no read down.”
- Star * integrity property—This property states that a subject at one level of integrity is not permitted to write to an object of higher integrity. This is also known as “no write up.”
- Invocation property—This property prohibits a subject at one level of integrity from invoking a subject at a higher level of integrity.
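Because Biba protects integrity rather than confidentiality, its read/write rules are the mirror image of Bell-LaPadula's. A minimal sketch with illustrative integrity levels:

```python
# Sketch of Biba's integrity rules — note they reverse Bell-LaPadula's,
# because the goal is to keep low-integrity data from contaminating
# high-integrity data.

LEVELS = {"low": 0, "medium": 1, "high": 2}

def can_read(subject_level, object_level):
    # Simple integrity property: no read down.
    return LEVELS[subject_level] <= LEVELS[object_level]

def can_write(subject_level, object_level):
    # Star (*) integrity property: no write up.
    return LEVELS[subject_level] >= LEVELS[object_level]

print(can_read("high", "low"))   # False: no read down
print(can_write("low", "high"))  # False: no write up
```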
Biba addresses only the first goal of integrity—preventing data modification by unauthorized users. Availability and confidentiality are not examined. The model also assumes that internal threats are handled by good coding practices and therefore focuses on external threats.
Clark-Wilson
The Clark-Wilson model was created in 1987. It differs from previous models because it was developed with the intention of being used for commercial activities. This model addresses all the goals of integrity. Clark-Wilson dictates that separation of duties must be enforced, subjects must access data through an application, and auditing is required. Some terms associated with Clark-Wilson include
- User
- Transformation procedure
- Unconstrained data item
- Constrained data item
- Integrity verification procedure
Clark-Wilson features an access control triple composed of the user, the transformation procedure (TP), and the constrained data item (CDI). The model was designed to protect integrity and prevent fraud: authorized users cannot change data in an inappropriate way. It also differs from the Biba model in that subjects are restricted—a subject at one level of access can read one set of data, whereas a subject at another level of access has access to a different set of data. Clark-Wilson controls the way in which subjects access objects so that the internal consistency of the system is ensured and data can be manipulated only in ways that preserve that consistency. Integrity verification procedures (IVPs) ensure that a data item is in a valid state; data cannot be tampered with while being changed, and the integrity of the data must remain consistent. Clark-Wilson also requires that all changes be logged. CDIs are data items whose integrity must be preserved; items not covered under the model are considered unconstrained data items (UDIs).
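The access control triple can be sketched as a simple allow-list plus an audit log. The user names, TP names, and data items below are hypothetical, chosen only to illustrate the user → TP → CDI chain:

```python
# Illustrative sketch of the Clark-Wilson access control triple: a user may
# touch a constrained data item (CDI) only through an authorized
# transformation procedure (TP), and every attempt is logged.

allowed_triples = {("alice", "post_payment", "accounts")}  # (user, TP, CDI)
audit_log = []

def run_tp(user, tp, cdi):
    if (user, tp, cdi) not in allowed_triples:
        audit_log.append((user, tp, cdi, "DENIED"))
        return False
    audit_log.append((user, tp, cdi, "OK"))  # Clark-Wilson requires logging
    return True

print(run_tp("alice", "post_payment", "accounts"))   # True: authorized triple
print(run_tp("alice", "edit_raw_file", "accounts"))  # False: not a sanctioned TP
```

Even the authorized user cannot bypass the TP and edit the CDI directly — that is the model's defense against inappropriate change.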
Take-Grant Model
The Take-Grant model uses a directed graph to specify how rights can be passed among subjects and objects, and it supports four basic operations: take, grant, create, and revoke. A subject with the take right over another subject can take that subject’s rights; a subject with the grant right can grant its rights to other subjects. The create and revoke operations work similarly: a subject with the create right can create new rights, and one with the revoke right can remove rights it has granted.
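The graph of rights can be sketched with a dictionary of edges. The subjects, object, and helper function here are hypothetical, illustrating only how the take operation propagates a right:

```python
# Rough sketch of Take-Grant rights propagation: rights are labeled edges in a
# directed graph, and the "take" right controls how rights move.

rights = {("bob", "file1"): {"read"},
          ("alice", "bob"): {"take"},   # alice holds take over bob
          ("bob", "carol"): {"grant"}}  # bob holds grant over carol

def take(taker, target, obj, right):
    # With the take right over `target`, taker may take target's right to obj.
    if "take" in rights.get((taker, target), set()) and \
       right in rights.get((target, obj), set()):
        rights.setdefault((taker, obj), set()).add(right)

take("alice", "bob", "file1", "read")
print(rights[("alice", "file1")])  # {'read'}
```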
Brewer and Nash Model
The Brewer and Nash model is similar to the Bell-LaPadula model and is also called the Chinese Wall model. It was developed to prevent conflict of interest (COI) problems. As an example, imagine that your security firm does security work for many large firms. If one of your employees could access information about all the firms that your company has worked for, he might be able to use this data in an unauthorized way. Therefore, the Chinese Wall model is more context oriented in that it prevents a worker consulting for one firm from accessing data belonging to another, thereby preventing any COI.
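The Chinese Wall's context-dependent rule — access depends on what you have already touched — can be sketched with a per-consultant access history. The firm names and conflict-of-interest classes are invented for illustration:

```python
# Sketch of the Brewer and Nash (Chinese Wall) rule: once a consultant accesses
# one firm's data, every competitor in the same conflict-of-interest (COI)
# class goes behind the wall.

coi_classes = {"banks": {"bank_a", "bank_b"}, "oil": {"oil_x"}}
history = {}   # consultant -> set of firms already accessed

def can_access(consultant, firm):
    accessed = history.setdefault(consultant, set())
    for firms in coi_classes.values():
        if firm in firms and accessed & (firms - {firm}):
            return False   # a competitor in this COI class was already accessed
    accessed.add(firm)
    return True

print(can_access("dave", "bank_a"))  # True: first access
print(can_access("dave", "bank_b"))  # False: competitor behind the wall
print(can_access("dave", "oil_x"))   # True: different COI class
```

Unlike Bell-LaPadula, the decision here is not a fixed level comparison — it changes as the subject's history grows.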
Other Models
A security model defines and describes what protection mechanisms are to be used and what these controls are designed to achieve. Although the previous section covered some of the more heavily tested models, you should have a basic understanding of a few more. These security models include
- Graham Denning model—This model uses a formal set of protection rules for which each object has an owner and a controller.
- Harrison-Ruzzo-Ullman model—This model details how subjects and objects can be created, deleted, accessed, or changed.
- Lattice model—This model is associated with MAC. Controls are applied to objects, and the model uses security levels that are represented by a lattice structure. This structure governs information flow. Subjects of the lattice model are allowed to access an object only if the security level of the subject is equal to or greater than that of the object. Every pair of levels has a least upper bound and a greatest lower bound.
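The lattice model's dominance rule and its bounds can be sketched with a simplified lattice. Here the lattice is reduced to a totally ordered chain of three illustrative levels (a real lattice may also have incomparable compartments):

```python
# Sketch of lattice-based access: a subject may access an object only if the
# subject's level dominates (>=) the object's level, and every pair of levels
# has a least upper bound (LUB) and greatest lower bound (GLB).

ORDER = ["public", "internal", "secret"]   # low -> high

def dominates(a, b):
    return ORDER.index(a) >= ORDER.index(b)

def least_upper_bound(a, b):
    return a if dominates(a, b) else b

def greatest_lower_bound(a, b):
    return b if dominates(a, b) else a

print(dominates("secret", "internal"))           # True: access allowed
print(least_upper_bound("public", "secret"))     # secret
print(greatest_lower_bound("public", "secret"))  # public
```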
Documents and Guidelines
The documents and guidelines discussed in the following sections were developed to help evaluate and establish system assurance. These items are important to the CISSP candidate because they provide a level of trust and assurance that rated systems will operate in a given and predictable manner. A trusted system has undergone testing and validation to a specific standard. Assurance is freedom from doubt—a level of confidence that a system will perform as required every time it is used. When a developer prepares to sell a system, he must have a way to measure the system’s features and abilities. The buyer, when preparing to make a purchase, must have a way to measure the system’s effectiveness and benchmark its abilities. The following documents and guidelines facilitate these needs.
The Rainbow Series
The rainbow series is aptly named because each book in the series has a label of a different color. This 6-foot-tall stack of books was developed by the National Computer Security Center (NCSC), an organization that is part of the National Security Agency (NSA). These guidelines were developed for the Trusted Product Evaluation Program (TPEP), which tests commercial products against a comprehensive set of security-related criteria. The first of these books was released in 1983 and is known as Trusted Computer System Evaluation Criteria (TCSEC) or the Orange Book. Because it addresses only standalone systems, other volumes were developed to increase the level of system assurance.
The Orange Book: Trusted Computer System Evaluation Criteria
The Orange Book’s official name is the Trusted Computer System Evaluation Criteria. As noted, it was developed to evaluate standalone systems. Its basis of measurement is confidentiality, so it is similar to the Bell-LaPadula model. It is designed to rate systems and place them into one of four categories:
- A—Verified protection. An A-rated system is the highest security division.
- B—Mandatory security. A B-rated system has mandatory protection of the TCB.
- C—Discretionary protection. A C-rated system provides discretionary protection of the TCB.
- D—Minimal protection. A D-rated system fails to meet any of the standards of A, B, or C and basically has no security controls.
The Orange Book not only rates systems into one of four categories, but each category is also broken down further. For each of these categories, a higher number indicates a more secure system, as noted in the following:
A is the highest security division. An A1 rating means that the system has verified protection and supports mandatory access control (MAC).
- A1 is the highest supported rating. Systems rated as such must meet formal methods and proof of integrity of TCB. An A1 system must not only be developed under strict guidelines but must also be installed and delivered securely. Examples of A1 systems include the Gemini Trusted Network Processor and the Honeywell SCOMP.
B is considered a mandatory protection design. Just as with an A-rated system, those that obtain a B rating must support MAC.
- B1 (labeled security protection) systems require sensitivity labels for all subjects and storage objects. Examples of B1-rated systems include the Cray Research Trusted Unicos 8.0 and the Digital SEVMS.
- For a B2 (structured protection) rating, the system must meet the requirements of B1 and support hierarchical device labels, trusted path communications between user and system, and covert storage analysis. An example of a B2 system is the Honeywell Multics.
- Systems rated as B3 (security domains) must meet B2 standards and support trusted path access and authentication, automatic security analysis, and trusted recovery. B3 systems must address covert timing vulnerabilities. A B3 system must not only support security controls during operation but also be secure during startup. An example of a B3-rated system is the Federal XTS-300.
C is considered a discretionary protection rating. C-rated systems support discretionary access control (DAC).
- Systems rated at C1 (discretionary security protection) don’t need to distinguish between individual users and types of access.
- C2 (controlled access protection) systems must meet C1 requirements plus must distinguish between individual users and types of access by means of strict login controls. C2 systems must also support object reuse protection. A C2 rating is common; products such as Windows NT and Novell NetWare 4.11 have a C2 rating.
- Any system that does not comply with any of the other categories or that fails to receive a higher classification is rated as a D-level (minimal protection) system. MS-DOS is a D-rated system.
Although the Orange Book is no longer considered current, it was one of the first standards. It is reasonable to expect that the exam might ask you about Orange Book levels and functions at each level. Listed in Table 5.4 are important notes to keep in mind about Orange Book levels.
Table 5.4. Orange Book Levels
Level | Items to Remember
A1    | Built, installed, and delivered in a secure manner
B1    | Security labels (MAC)
B2    | Security labels and verification of no covert channels (MAC)
B3    | Security labels, verification of no covert channels, and must stay secure during startup (MAC)
C1    | Weak protection mechanisms (DAC)
C2    | Strict login procedures (DAC)
D     | Failed or was not tested
The Red Book: Trusted Network Interpretation
The Red Book’s official name is the Trusted Network Interpretation (TNI). The purpose of the TNI is to examine security for networks and network components. Whereas the Orange Book addresses only confidentiality, the Red Book also examines integrity and availability, and it is tasked with examining the operation of networked devices. The Red Book reviews three areas:
- DoS prevention—Management and continuity of operations.
- Compromise protection—Data and traffic confidentiality, selective routing.
- Communications integrity—Authentication, integrity, and nonrepudiation.
Information Technology Security Evaluation Criteria
ITSEC is a European standard developed in the 1980s to evaluate confidentiality, integrity, and availability of an entire system. ITSEC was unique in that it was the first standard to unify markets and bring all of Europe under one set of guidelines. ITSEC designates the target system as the Target of Evaluation (TOE). The evaluation is actually divided into two parts: One part evaluates functionality and the other evaluates assurance. There are 10 functionality (F) classes and 7 assurance (E) classes. Assurance classes rate the effectiveness and correctness of a system. Table 5.5 shows these ratings and how they correspond to the TCSEC ratings.
Table 5.5. ITSEC Functionality Ratings and Comparison to TCSEC
(F) Class | (E) Class | TCSEC Rating
NA        | E0        | D
F1        | E1        | C1
F2        | E2        | C2
F3        | E3        | B1
F4        | E4        | B2
F5        | E5        | B3
F5        | E6        | A1
F6        | –         | TOEs with high integrity requirements
F7        | –         | TOEs with high availability requirements
F8        | –         | TOEs with high integrity requirements during data communications
F9        | –         | TOEs with high confidentiality requirements during data communications
F10       | –         | Networks with high confidentiality and integrity requirements
Common Criteria
With all the standards we have discussed, it is easy to see how someone might have a hard time determining which one is the right choice. The International Organization for Standardization (ISO) had these same thoughts; because of the various standards and ratings that existed, it decided there should be a single global standard. Figure 5.7 illustrates the development of Common Criteria.
Figure 5.7. Common Criteria development.
In 1997, the ISO released the Common Criteria (ISO 15408), which is an amalgamated version of TCSEC, ITSEC, and the CTCPEC. Common Criteria is designed around TCB entities. These entities include physical and logical controls, startup and recovery, reference mediation, and privileged states. Common Criteria categorizes assurance into one of seven increasingly strict levels of assurance. These are referred to as Evaluation Assurance Levels (EALs). EALs provide a specific level of confidence in the security functions of the system being analyzed. The system being analyzed and tested is known as the Target of Evaluation (TOE), which is just another name for the system being subjected to the security evaluation. A description of each of the seven levels of assurance follows:
- EAL 0—Inadequate assurance
- EAL 1—Functionality tested
- EAL 2—Structurally tested
- EAL 3—Methodically checked and tested
- EAL 4—Methodically designed, tested, and reviewed
- EAL 5—Semiformally designed and tested
- EAL 6—Semiformally verified design and tested
- EAL 7—Formally verified design and tested
Common Criteria defines two types of security requirements: functional and assurance. Functional requirements define what a product or system does. They also define the security capabilities of a product. The assurance requirements and specifications to be used as the basis for evaluation are known as the Security Target (ST). A protection profile defines the system and its controls. The protection profile is divided into the following five sections:
- Rationale
- Evaluation assurance requirements
- Descriptive elements
- Functional requirements
- Development assurance requirements
Assurance requirements define how well a product is built. Assurance requirements give confidence in the product and show the correctness of its implementation.
System Validation
No system or architecture will ever be completely secure; there will always be a certain level of risk. Security professionals must understand this risk and either accept it, mitigate it, or transfer it to a third party. All the documentation and guidelines already discussed deal with ways to measure and assess risk. These can be a big help in ensuring that the implemented systems meet our requirements. However, before we begin to use the systems, we must complete two additional steps: certification and accreditation.
Certification and Accreditation
Certification is the process of validating that implemented systems are configured and operating as expected. It also validates that the systems are connected to and communicate with other systems in a secure and controlled manner, and that they handle data in a secure and approved manner. The certification process is a technical evaluation of the system that can be carried out by independent security teams or by the existing staff. Its goal is to uncover any vulnerabilities or weaknesses in the implementation.
The results of the certification process are reported to the organization’s management for review and approval. If management agrees with the findings of the certification, the report is formally approved. This formal approval of the certification is the accreditation process. Management usually issues accreditation as a formal, written approval that the certified system may be used as specified in the certification documentation. If changes are made to the system, if it is reconfigured, or if other changes occur in the environment, the certification and accreditation process must be repeated. The entire process is also repeated periodically, at intervals that depend on the industry and the regulations it must comply with. As an example, Section 404 of Sarbanes-Oxley requires an annual evaluation of internal systems that deal with financial controls and reporting systems.
Governance and Enterprise Architecture
Information security governance requires more than certification and accreditation. Governance should focus on the availability of services, integrity of information, and protection of data confidentiality. The Internet and global connectivity extend the company’s network far beyond its traditional border. This places new demands on information security and its governance. Attacks can originate from not just inside the organization, but anywhere in the world. Failure to adequately address this important concern can have serious consequences.
Security and governance can be enhanced by implementing an enterprise architecture (EA) plan. EA is the practice within information technology of organizing and documenting a company’s IT assets to enhance planning, management, and expansion. The primary purpose of using EA is to ensure that business strategy and IT investments are aligned. The benefit of EA is that it provides a means of traceability that extends from the highest level of business strategy down to the fundamental technology. EA has grown since it was first developed in the 1980s; companies such as Intel and BP, as well as the United States government, now use this methodology. One early EA model is the Zachman Framework. It was designed to allow companies to structure policy documents for information systems so that they focus on Who, What, Where, When, Why, and How, as shown in Figure 5.8.
Figure 5.8. Zachman model.
Federal law requires government agencies to set up EAs and a structure for its governance. This process is guided by the Federal Enterprise Architecture (FEA) reference model. The FEA is designed to use five models:
- Performance reference model—A framework used to measure performance of major IT investments.
- Business reference model—A framework used to provide an organized, hierarchical model for day-to-day business operations.
- Service component reference model—A framework used to classify service components with respect to how they support business or performance objectives.
- Technical reference model—A framework used to categorize the standards, specifications, and technologies that support and enable the delivery of service components and capabilities.
- Data reference model—A framework used to provide a standard means by which data can be described, categorized, and shared.
An independently designed, but later integrated, subset of the Zachman Framework is the Sherwood Applied Business Security Architecture (SABSA). Like the Zachman Framework, this model and methodology was developed for risk-driven enterprise information security architectures. It asks what, why, how, and where. More information on the SABSA model is at http://www.sabsa-institute.org/.
The British Standard (BS) 7799 was developed in England as a standard method to measure risk. Because the document found such a wide audience and was adopted by businesses and organizations, it evolved into ISO 17799 and was later used in the development of ISO 27002.
ISO 17799 is a code of practice for information security. ISO 17799 is written for individuals responsible for initiating, implementing, or maintaining information security management systems. Its goal is to help protect confidentiality, integrity, and availability. Compliance with 17799 is an involved task and is far from trivial for even the most security conscious organizations. ISO 17799 provides best-practice guidance on information security management and is divided into 12 main sections:
- Risk assessment and treatment
- Security policy
- Organization of information security
- Asset management
- Human resources security
- Physical and environmental security
- Communications and operations management
- Access control
- Information systems acquisition, development, and maintenance
- Information security incident management
- Business continuity management
- Compliance
ISO 27000 is part of a family of standards that traces its origins back to BS 7799. Organizations can become ISO 27001 certified by having their compliance verified by an accredited certification body. Some of the core ISO standards include the following:
- 27001—This document describes requirements on how to establish, implement, operate, monitor, review, and maintain an information security management system (ISMS). It follows a Plan-Do-Check-Act model.
- 27002—This document was originally the BS7799 standard, then was republished as an ISO 17799 standard. It also describes ways to develop a security program within the organization.
- 27003—This document focuses on implementation.
- 27004—This document describes the ways to measure the effectiveness of the information security program.
- 27005—This document describes risk management.
You can find out more about this standard by visiting the ISO/IEC 27002:2005 website (http://tinyurl.com/8feso97).
One final item worth mentioning is the Information Technology Infrastructure Library (ITIL). ITIL provides a framework for identifying, planning, delivering, and supporting IT services for the business. ITIL presents a service lifecycle that includes
- Service strategy
- Service design
- Service transition
- Service operation
- Continual service improvement
True security is a layered process. Each of the items discussed in this section can be used to build a more secure organization.
Security Architecture Threats
Just as in most other chapters of this book, this one also reviews potential threats and vulnerabilities. Anytime a security professional makes the case for stronger security, there will be those that ask why such funds should be spent. It’s important to point out not only the benefits of good security, but also the potential risks of not implementing good practices and procedures. We live in a world of risk. As security professionals, we need to be aware of these threats to security and understand how the various protection mechanisms discussed throughout this chapter can be used to raise the level of security. Doing this can help build real defense in depth.
Buffer Overflow
Buffer overflows occur because of poor coding techniques. A buffer is a temporary storage area that has been coded to hold a certain amount of data. If additional data is fed to the buffer, it can spill over or overflow to adjacent buffers. This can corrupt those buffers and cause the application to crash or possibly allow an attacker to execute his own code that he has loaded onto the stack. Ideally, programs should be written to check that you cannot stuff 32 characters into a 24-character buffer; however, this type of error checking does not always occur. Error checking is really nothing more than making sure that buffers receive the type and amount of information required. Here is an example buffer overflow:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int abc() {
    char buffer[8];               /* room for only 7 characters plus a null */
    strcpy(buffer, "AAAAAAAAAA"); /* 10 bytes + null terminator overflow the buffer */
    return 0;
}
For example, the 2010 Aurora exploit targeted a memory-corruption flaw in Internet Explorer on Windows XP systems. As a result of the attack, attackers could take control of the client system and execute commands remotely.
The point here is that the programmer’s work should always be checked for good security practices. Due diligence is required to prevent buffer overflows. Continuous coder training is key to keeping abreast of ongoing threats and a changing landscape. All data being passed to a program should be checked to make sure that it matches the correct parameters. Defenses against buffer overflows include code reviews, using safe programming languages, and applying patches and updates in a timely manner.
Back Doors
Back doors are another potential threat to the security of systems and software. Back doors, which are also sometimes referred to as maintenance hooks, are used by programmers during development to allow easy access to a piece of software. Often these back doors are undocumented. A back door can be used when software is developed in sections and developers want a means of accessing certain parts of the program without having to run through all the code. If back doors are not removed before the release of the software, they can allow an attacker to bypass security mechanisms and access the program.
Asynchronous Attacks
Asynchronous attacks are a form of attack that typically targets timing. The objective is to exploit the delay between the time of check (TOC) and the time of use (TOU). These attacks are sometimes called race conditions because the attacker races to change the object after it has been checked but before it is used.
As an example, if a program creates a data file to hold the amount a customer owes and the attacker can race to replace this value before the program reads it, he can successfully manipulate the program. In reality, it can be difficult to exploit a race condition because a hacker might have to attempt the exploit many times before succeeding.
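One common defense is to collapse the check and the use into a single atomic operation so there is no gap to race into. This sketch (file name and value are invented for illustration) contrasts the racy check-then-act pattern with an atomic create:

```python
# Sketch of closing a TOC/TOU gap: instead of checking for a file and then
# creating it as two separate steps, ask the OS to check and create atomically.

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "billing.dat")

# Racy pattern: state can change between the check and the use.
if not os.path.exists(path):
    with open(path, "w") as f:   # an attacker's window sits between check and use
        f.write("100.00")

# Safer pattern: O_CREAT | O_EXCL makes check-and-create one atomic step.
try:
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    os.close(fd)
except FileExistsError:
    print("file already exists; refusing to overwrite")
```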
Covert Channels
Covert channels are a means of moving information in a manner in which it was not intended. Covert channels are a favorite of attackers because they know that you cannot deny what you must permit. The term was originally used in TCSEC documentation to refer to ways of transferring information from a higher classification to a lower classification. Covert channel attacks can be broadly separated into two types:
- Covert timing channel attacks—Timing attacks are difficult to detect and function by altering a component or by modifying resource timing.
- Covert storage channel attacks—These attacks use one process to write data to a storage area and another process to read the data.
Here is an example of how covert channel attacks happen in real life. Your organization has decided to allow ping (Internet Control Message Protocol [ICMP]) traffic into and out of your network. Based on this knowledge, an attacker has planted the Loki program on your network. Loki uses the payload portion of the ping packet to move data into and out of your network. Therefore, the network administrator sees nothing but normal ping traffic and is not alerted, even though the attacker is busy stealing company secrets. Sadly, many programs can perform this type of attack.
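To make the Loki example concrete, the sketch below builds (but does not send) an ICMP echo request whose payload carries smuggled data rather than the usual filler bytes. The secret string is fabricated; to a casual inspection of headers, the packet is a well-formed ping:

```python
# Covert storage channel sketch: hiding data in the payload of an ICMP
# echo request. This only constructs the packet bytes; nothing is sent.
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum (16-bit one's-complement sum)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_with_secret(secret: bytes, ident: int = 1, seq: int = 1) -> bytes:
    """Build an ICMP type 8 (echo request) packet whose payload is the
    smuggled data instead of the usual filler bytes."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum zeroed
    csum = checksum(header + secret)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + secret

pkt = icmp_echo_with_secret(b"company secret")
assert pkt[0] == 8                   # looks like an ordinary echo request
assert checksum(pkt) == 0            # checksum verifies like a normal ping
assert pkt[8:] == b"company secret"  # ...but the payload is the channel
```

This is why "allow ping" is not a harmless rule: the protocol is legitimate, so only payload inspection or anomaly detection (for example, unusual ping payload sizes or frequencies) can reveal the channel.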
Incremental Attacks
The goal of an incremental attack is to make changes slowly over time. By making such small changes over such a long period, an attacker hopes to remain undetected. The two primary incremental attacks are data diddling, which is possible if the attacker has access to the system and can make small, incremental changes to data or files, and the salami attack, which is similar to data diddling but involves making small changes to financial accounts or records.
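A salami attack can be sketched as shaving the sub-cent remainder of each interest calculation into a hidden account. The account names, balances, and rate below are all fabricated for illustration:

```python
# Salami attack sketch: each customer loses a fraction of a cent, but the
# shaved slices accumulate in the attacker's hidden account.
from decimal import Decimal, ROUND_DOWN

accounts = {"alice": Decimal("1234.56"), "bob": Decimal("2718.28")}
slush = Decimal("0.00")          # attacker's hidden account
RATE = Decimal("0.0123")         # monthly interest rate (fabricated)

for name, balance in accounts.items():
    interest = balance * RATE                                  # exact amount owed
    credited = interest.quantize(Decimal("0.01"), ROUND_DOWN)  # what the customer sees
    slush += interest - credited                               # the shaved slice
    accounts[name] = balance + credited

# Each slice is under a cent, so every customer's statement looks correct,
# yet across thousands of accounts and billing cycles the slush grows.
print(f"hidden account after one cycle: {slush}")
```

Because every individual statement reconciles to the penny, the theft only shows up in an aggregate audit that compares total interest computed against total interest credited.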
Exam Prep Questions
Which of the following best describes a superscalar processor?
A. A superscalar processor can execute only one instruction at a time.
B. A superscalar processor has two large caches that are used as input and output buffers.
C. A superscalar processor can execute multiple instructions at the same time.
D. A superscalar processor has two large caches that are used as output buffers.
Which of the following are developed by programmers and used to allow the bypassing of normal processes during development but are left in the software when it ships to the customer?
A. Back doors
B. Traps
C. Buffer overflows
D. Covert channels
Carl has noticed a high level of TCP traffic in and out of the network. After running a packet sniffer, he discovered malformed TCP ACK packets with unauthorized data. What has Carl discovered?
A. Buffer overflow attack
B. Asynchronous attack
C. Covert channel attack
D. DoS attack
Which of the following types of CPUs can perform multiple operations from a single instruction?
A. DITSCAP
B. RISC
C. NIACAP
D. CISC
Which of the following standards evaluates functionality and assurance separately?
A. TCSEC
B. TNI
C. ITSEC
D. CTCPEC
Which of the following was the first model developed that was based on confidentiality?
A. Bell-LaPadula
B. Biba
C. Clark-Wilson
D. Take-Grant
Which of the following models is integrity based and was developed for commercial applications?
A. Information Flow
B. Clark-Wilson
C. Bell-LaPadula
D. Brewer-Nash
Which of the following does the Biba model address?
A. Focuses on internal threats
B. Focuses on external threats
C. Addresses confidentiality
D. Addresses availability
Which model is also known as the Chinese Wall model?
A. Biba
B. Take-Grant
C. Harrison-Ruzzo-Ullman
D. Brewer-Nash
Which of the following examines integrity and availability?
A. Orange Book
B. Brown Book
C. Red Book
D. Purple Book
What is the purpose of the *-property in the Bell-LaPadula model?
A. No read up
B. No write up
C. No read down
D. No write down
What is the purpose of the simple integrity property of the Biba model?
A. No read up
B. No write up
C. No read down
D. No write down
Which of the following can be used to connect different MAC systems together?
A. Labels
B. Reference monitor
C. Controls
D. Guards
Which of the following security modes of operation best describes when a user has a valid need to know all data?
A. Dedicated
B. System High
C. Compartmented
D. Multilevel
Which of the following security models makes use of the TLC concept?
A. Biba
B. Clark-Wilson
C. Bell-LaPadula
D. Brewer-Nash
Answers to Exam Prep Questions
- C. A superscalar processor can execute multiple instructions at the same time. Answer A describes a scalar processor; it can execute only one instruction at a time. Answer B does not describe a superscalar processor because it does not have two large caches that are used as input and output buffers. Answer D is incorrect because a superscalar processor does not have two large caches that are used as output buffers.
- A. Back doors, also referred to as maintenance hooks, are used by programmers during development to give them easy access into a piece of software. Answer B is incorrect because a trap is a message used by the Simple Network Management Protocol (SNMP) to report a serious condition to a management station. Answer C is incorrect because a buffer overflow occurs because of poor programming. Answer D is incorrect because a covert channel is a means of moving information in a manner in which it was not intended.
- C. A covert channel is a means of moving information in a manner in which it was not intended. A buffer overflow occurs because of poor programming and usually results in program failure or the attacker’s ability to execute his code; thus, answer A is incorrect. An asynchronous attack deals with performing an operation between the TOC and the TOU (so answer B is incorrect), whereas a DoS attack affects availability, not confidentiality (making answer D incorrect).
- D. The Complex Instruction Set Computing (CISC) CPU can perform multiple operations from a single instruction. Answer A is incorrect because DITSCAP is the Defense Information Technology Systems Certification and Accreditation Process. Answer B describes the Reduced Instruction Set Computing (RISC) CPU which uses simple instructions that require a reduced number of clock cycles. Answer C is incorrect because NIACAP is the National Information Assurance Certification and Accreditation Process, an accreditation process.
- C. ITSEC is a European standard that evaluates functionality and assurance separately. All other answers are incorrect because they do not separate the evaluation criteria. TCSEC is also known as the Orange Book, TNI is known as the Red Book, and CTCPEC is a Canadian assurance standard; therefore, answers A, B, and D are incorrect.
- A. Bell-LaPadula was the first model developed that is based on confidentiality. It uses two main rules to enforce its operation. Answers B, C, and D are incorrect. Biba and Clark-Wilson both deal with integrity, whereas the Take-Grant model is based on four basic operations.
- B. Clark-Wilson was developed for commercial activities. This model dictates that the separation of duties must be enforced, subjects must access data through an application, and auditing is required. Answers A, C, and D are incorrect. The Information Flow model addresses the flow of information and can be used to protect integrity or confidentiality. Bell-LaPadula is a confidentiality model, and Brewer-Nash was developed to prevent conflicts of interest.
- B. The Biba model assumes that internal threats are being protected by good coding practices and, therefore, focuses on external threats. Answers A, C, and D are incorrect. Biba addresses only integrity, not availability or confidentiality.
- D. The Brewer-Nash model is also known as the Chinese Wall model and was specifically developed to prevent conflicts of interest. Answers A, B, and C are incorrect because they do not fit the description. Biba is integrity based, Take-Grant is based on four basic operations, and Harrison-Ruzzo-Ullman defines how access rights can be changed, created, or deleted.
- C. The Red Book examines integrity and availability of networked components. Answer A is incorrect because the Orange Book deals with confidentiality. Answer B is incorrect because the Brown Book is a guide to understanding trusted facility management. Answer D is incorrect because the Purple Book deals with database management.
- D. The *-property enforces “no write down” and is used to prevent someone with high clearance from writing data to a lower classification. Answers A, B, and C do not properly describe the Bell-LaPadula model star property.
- C. The purpose of the simple integrity property of the Biba model is to prevent a subject from reading an object of lower integrity (no read down). This helps protect the integrity of sensitive information.
- D. A guard is used to connect various MAC systems together and allow for communication between these systems. Answer A is incorrect because labels are associated with MAC systems but are not used to connect them together. Answer B is incorrect because the reference monitor is associated with the TCB. Answer C is incorrect because the term controls here is simply a distracter.
- A. Out of the four modes listed, only dedicated supports a valid need to know for all information on the system. Therefore, answers B, C, and D are incorrect.
- B. The Clark-Wilson model was designed to support the goals of integrity and is focused on TLC, which stands for tampered, logged, and consistent. Answers A, C, and D are incorrect; Biba, Bell-LaPadula, and Brewer-Nash are not associated with TLC.
Need to Know More?
Protection rings: www.multicians.org/protection.pdf
Common Criteria: http://www.niap-ccevs.org/cc-scheme/
Smashing the stack for fun and profit: http://insecure.org/stf/smashstack.html
Covert-channel attacks: http://www.cyberguard.com/download/white_paper/en_cg_covert_channels.pdf
Java security: http://java.sun.com/javase/technologies/security/
How Windows measures up to TCSEC standards: http://technet.microsoft.com/en-us/library/cc767092.aspx
The Rainbow Series: http://csrc.nist.gov/publications/secpubs/rainbow/
The Bell-LaPadula model: http://www.computing.dcu.ie/~davids/courses/CA548/C_I_Policies.pdf
ISO 17799: http://www.iso.org/iso/home.htm