SSCP Exam Questions

Total 1048 Questions

Exam Last Updated: 27-Dec-2024

Topic 1: Access Control

Which of the following would be true about static password tokens?


A.

The owner identity is authenticated by the token




B.

The owner will never be authenticated by the token.


C.

The owner will authenticate himself to the system.


D.

The token does not authenticate the token owner but the system.





A.
  

The owner identity is authenticated by the token





Password Tokens
Tokens are electronic devices or cards that supply a user's password for them. A token
system can be used to supply either a static or a dynamic password. There is a big
difference between the two: a static system will normally log a user in automatically,
whereas with a dynamic system the user will often have to log in themselves.
Static Password Tokens:
The owner identity is authenticated by the token. This is done by the person who issues the
token to the owner (normally the employer). The owner of the token is now authenticated
by "something you have". The token authenticates the identity of the owner to the
information system. An example of this occurring is when an employee swipes his or her
smart card over an electronic lock to gain access to a store room.
Synchronous Dynamic Password Tokens:
This system is a lot more complex than the static password token. Synchronous
dynamic password tokens generate new passwords at certain time intervals that are
synchronized with the main system. The password is generated on a small device similar to a
pager or a calculator that can often be attached to the user's key ring. Each password is
only valid for a certain time period; entering a password outside its valid time period will invalidate the authentication. The time factor can also be the system's downfall. If the
clock on the system or on the password token device falls out of sync, a user can have
trouble authenticating to the system.
Asynchronous Dynamic Password Tokens:
The clock synchronization problem is eliminated with asynchronous dynamic password tokens.
This system works on the same principle as the synchronous one, but it does not have a
time frame. Many large companies use this system, especially for employees who may work
from home over the company's VPN (Virtual Private Network).
Challenge Response Tokens:
This is an interesting system. A user will be sent special "challenge" strings at either
random or timed intervals. The user inputs this challenge string into their token device and
the device responds by generating a challenge response. The user then types this
response into the system and, if it is correct, is authenticated.
Reference(s) used for this question:
http://www.informit.com/guides/content.aspx?g=security&seqNum=146
and KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten
Domains of Computer Security, 2001, John Wiley & Sons, Page 37.
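To make the time-synchronization idea concrete, here is a minimal Python sketch in the style of RFC 6238 (TOTP). It is an illustration only: the function name, shared secret, 30-second time step, and 6-digit length are assumptions for demonstration, not a description of any specific vendor's token.

# Illustrative sketch only: a time-synchronized one-time password in the style of
# RFC 6238 (TOTP). Real synchronous tokens are vendor-specific; the shared secret,
# time step, and digit count below are assumptions for demonstration.
import hashlib
import hmac
import struct
import time

def generate_otp(shared_secret: bytes, time_step: int = 30, digits: int = 6) -> str:
    """Derive a one-time password from the current 30-second time window."""
    counter = int(time.time()) // time_step          # both token and server compute this
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226 style)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(generate_otp(b"example-shared-secret"))

Because the token device and the authentication server share the same secret and clock, both derive the same password for the current window; a password entered outside that window simply fails to match.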

Which of the following statements pertaining to access control is false?


A.

Users should only access data on a need-to-know basis.




B.

If access is not explicitly denied, it should be implicitly allowed.


C.

Access rights should be granted based on the level of trust a company has in a subject.


D.

Roles can be an efficient way to assign rights to a type of user who performs certain
tasks.





B.
  

If access is not explicitly denied, it should be implicitly allowed.



Access control mechanisms should default to no access to provide the
necessary level of security and ensure that no security holes go unnoticed. If access is not
explicitly allowed, it should be implicitly denied.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-
Hill/Osborne, 2002, Chapter 4: Access Control (page 143).
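As a rough illustration of the default-to-no-access principle, here is a minimal Python sketch; the ACL structure, resource names, and user names are made-up assumptions. Anything not explicitly allowed is implicitly denied.

# Minimal sketch of the "implicit deny" principle: access is granted only when an
# explicit allow entry exists. The ACL layout below is an assumption for illustration,
# not any particular product's API.
from typing import Dict, Set

acl: Dict[str, Set[str]] = {
    "payroll.db": {"alice", "hr_role"},   # explicit allow entries
    "audit.log": {"auditor_role"},
}

def is_allowed(subject: str, resource: str) -> bool:
    # Default to no access: anything not explicitly allowed is implicitly denied.
    return subject in acl.get(resource, set())

print(is_allowed("alice", "payroll.db"))   # True  - explicitly allowed
print(is_allowed("bob", "payroll.db"))     # False - implicitly denied
print(is_allowed("alice", "unknown.txt"))  # False - unknown resource, denied by default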

Which of the following is the most reliable authentication method for remote access?


A.

Variable callback system




B.

Synchronous token


C.

Fixed callback system


D.

 Combination of callback and caller ID





B.
  

Synchronous token



A Synchronous token generates a one-time password that is only valid for a
short period of time. Once the password is used it is no longer valid, and it expires if not
entered in the acceptable time frame.
The following answers are incorrect:
Variable callback system. Although variable callback systems are more flexible than fixed
callback systems, the system assumes the identity of the individual unless two-factor
authentication is also implemented. By itself, this method might allow an attacker access as
a trusted user.
Fixed callback system. Authentication provides assurance that someone or something is
who or what he/it is supposed to be. Callback systems authenticate a person, but anyone
can pretend to be that person. They are tied to a specific place and phone number, which
can be spoofed by implementing call-forwarding.
Combination of callback and Caller ID. The caller ID and callback functionality provides
greater confidence and auditability of the caller's identity. By disconnecting and calling back
only authorized phone numbers, the system has a greater confidence in the location of the
call. However, unless combined with strong authentication, any individual at the location
could obtain access.
The following reference(s) were/was used to create this question:
Shon Harris AIO v3 p. 140, 548
ISC2 OIG 2007 p. 152-153, 126-127
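To illustrate why a synchronous token is considered reliable here, below is a hedged Python sketch of server-side validation: the submitted one-time password must match the current time window and may be used only once. The window size, the data structures, and the derive-code callable are assumptions for demonstration.

# Hedged sketch: server-side validation of a synchronous one-time password.
# The password must match the current time window and may be used only once.
import time

used_codes = set()   # remember codes already consumed to prevent replay

def validate_otp(submitted: str, derive_code_for_window, window_seconds: int = 30) -> bool:
    window = int(time.time()) // window_seconds
    expected = derive_code_for_window(window)     # same derivation the token device uses
    if submitted != expected:
        return False                              # wrong code or wrong time window
    if (window, submitted) in used_codes:
        return False                              # one-time: reject replayed codes
    used_codes.add((window, submitted))
    return True

# Usage idea (derive_code_for_window is hypothetical here): plug in whatever
# derivation the token uses, e.g. an HMAC over the window counter and shared secret.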

Which access control model is also called Non Discretionary Access Control (NDAC)?


A.

 Lattice based access control




B.

 Mandatory access control


C.

Role-based access control


D.

 Label-based access control





C.
  

Role-based access control



RBAC is sometimes also called non-discretionary access control (NDAC) (as
Ferraiolo says, "to distinguish it from the policy-based specifics of MAC"). Another model
that fits within the NDAC category is Rule-Based Access Control (RuBAC or RBAC). Most
of the CISSP books use the same acronym for both models, but NIST tends to use a
lowercase "u" between R and B to differentiate the two models.
You can certainly mimic MAC using RBAC, but true MAC makes use of labels, which
contain the sensitivity of the objects and the categories they belong to. No labels means
MAC is not being used.
One of the most fundamental data access control decisions an organization must make is
the amount of control it will give system and data owners to specify the level of access
users of that data will have. In every organization there is a balancing point between the
access controls enforced by organization and system policy and the ability for information
owners to determine who can have access based on specific business requirements. The
process of translating that balance into a workable access control model can be defined by
three general access frameworks: Discretionary access control
Mandatory access control
Nondiscretionary access control
A role-based access control (RBAC) model bases the access control authorizations on the
roles (or functions) that the user is assigned within an organization. The determination of
what roles have access to a resource can be governed by the owner of the data, as with
DACs, or applied based on policy, as with MACs.
Access control decisions are based on job function, previously defined and governed by
policy, and each role (job function) will have its own access capabilities. Objects associated
with a role will inherit privileges assigned to that role. This is also true for groups of users,
allowing administrators to simplify access control strategies by assigning users to groups
and groups to roles.
There are several approaches to RBAC. As with many system controls, there are variations
on how they can be applied within a computer system. There are four basic RBAC architectures:
1. Non-RBAC: Non-RBAC is simply user-granted access to data or an application through
traditional mapping, such as with ACLs. There are no formal “roles” associated with the
mappings, other than any identified by the particular user.
2. Limited RBAC: Limited RBAC is achieved when users are mapped to roles within a
single application rather than through an organization-wide role structure. Users in a limited
RBAC system are also able to access non-RBAC-based applications or data. For example,
a user may be assigned to multiple roles within several applications and, in addition, have direct access to another application or system independent of his or her assigned role. The
key attribute of limited RBAC is that the role for that user is defined within an application
and not necessarily based on the user’s organizational job function.
3. Hybrid RBAC: Hybrid RBAC introduces the use of a role that is applied to multiple
applications or systems based on a user’s specific role within the organization. That role is
then applied to applications or systems that subscribe to the organization’s role-based
model. However, as the term “hybrid” suggests, there are instances where the subject may
also be assigned to roles defined solely within specific applications, complementing (or,
perhaps, contradicting) the larger, more encompassing organizational role used by other
systems.
4. Full RBAC: Full RBAC systems are controlled by roles defined by the organization’s
policy and access control infrastructure and then applied to applications and systems
across the enterprise. The applications, systems, and associated data apply permissions
based on that enterprise definition, and not one defined by a specific application or system.
Be careful not to try to make MAC and DAC opposites of each other - they are two
different access control strategies with RBAC being a third strategy that was defined later
to address some of the limitations of MAC and DAC.
The other answers are not correct because:
Mandatory access control is incorrect because, though it is by definition not discretionary, it
is not called "non-discretionary access control." MAC makes use of labels to indicate the
sensitivity of the object and it also makes use of categories to implement need to know.
Label-based access control is incorrect because this is not a name for a type of access
control but simply a bogus distractor.
Lattice based access control is not adequate either. A lattice is a series of levels and a
subject will be granted an upper and lower bound within the series of levels. These levels
could be sensitivity levels or they could be confidentiality levels or they could be integrity
levels.
Reference(s) used for this question:
All in One, third edition, page 165.
Ferraiolo, D., Kuhn, D. & Chandramouli, R. (2003). Role-Based Access Control, p. 18.
Ferraiolo, D., Kuhn, D. (1992). Role-Based Access Controls. http://csrc.nist.gov/rbac/Role_Based_Access_Control-1992.html
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition :
Access Control ((ISC)2 Press) (Kindle Locations 1557-1584). Auerbach Publications.
Kindle Edition.
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition :
Access Control ((ISC)2 Press) (Kindle Locations 1474-1477). Auerbach Publications.
Kindle Edition.
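As an illustration of the user-to-role-to-permission mapping described above, here is a minimal Python sketch; the role names, permission names, and user names are assumptions for demonstration only.

# Minimal sketch of role-based access control: users map to roles, roles map to
# permissions, and a user implicitly inherits the permissions of every assigned role.
from typing import Dict, Set

role_permissions: Dict[str, Set[str]] = {
    "help_desk":      {"reset_password", "view_ticket"},
    "security_admin": {"reset_password", "view_ticket", "modify_acl", "view_audit_log"},
}

user_roles: Dict[str, Set[str]] = {
    "alice": {"security_admin"},
    "bob":   {"help_desk"},
}

def has_permission(user: str, permission: str) -> bool:
    """A user implicitly inherits the permissions of each role assigned to them."""
    return any(permission in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(has_permission("bob", "modify_acl"))    # False - help_desk role lacks it
print(has_permission("alice", "modify_acl"))  # True  - inherited from security_admin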

Which TCSEC level is labeled Controlled Access Protection?


A.

 C1




B.

 C2


C.

C3


D.

 B1





B.
  

 C2



C2 is labeled Controlled Access Protection.
The TCSEC defines four divisions: D, C, B and A where division A has the highest security.
Each division represents a significant difference in the trust an individual or organization
can place on the evaluated system. Additionally divisions C, B and A are broken into a
series of hierarchical subdivisions called classes: C1, C2, B1, B2, B3 and A1.
Each division and class expands or modifies the requirements of the
immediately prior division or class.
D — Minimal protection
Reserved for those systems that have been evaluated but that fail to meet the
requirements for a higher division
C — Discretionary protection
C1 — Discretionary Security Protection
Identification and authentication
Separation of users and data
Discretionary Access Control (DAC) capable of enforcing access limitations on an
individual basis
Required System Documentation and user manuals
C2 — Controlled Access Protection
More finely grained DAC
Individual accountability through login procedures
Audit trails
Object reuse
Resource isolation
B — Mandatory protection
B1 — Labeled Security Protection
Informal statement of the security policy model
Data sensitivity labels
Mandatory Access Control (MAC) over selected subjects and objects
Label exportation capabilities
All discovered flaws must be removed or otherwise mitigated
Design specifications and verification
B2 — Structured Protection
Security policy model clearly defined and formally documented
DAC and MAC enforcement extended to all subjects and objects
Covert storage channels are analyzed for occurrence and bandwidth
Carefully structured into protection-critical and non-protection-critical elements
Design and implementation enable more comprehensive testing and review
Authentication mechanisms are strengthened
Trusted facility management is provided with administrator and operator segregation
Strict configuration management controls are imposed
B3 — Security Domains
Satisfies reference monitor requirements
Structured to exclude code not essential to security policy enforcement
Significant system engineering directed toward minimizing complexity
Security administrator role defined
Audit security-relevant events
Automated imminent intrusion detection, notification, and response
Trusted system recovery procedures
Covert timing channels are analyzed for occurrence and bandwidth
An example of such a system is the XTS-300, a precursor to the XTS-400.
A — Verified protection
A1 — Verified Design
Functionally identical to B3
Formal design and verification techniques including a formal top-level specification
Formal management and distribution procedures
An example of such a system is Honeywell's Secure Communications Processor SCOMP,
a precursor to the XTS-400
Beyond A1
System Architecture demonstrates that the requirements of self-protection and
completeness for reference monitors have been implemented in the Trusted Computing
Base (TCB).
Security Testing automatically generates test cases from the formal top-level specification or
formal lower-level specifications.
Formal Specification and Verification is where the TCB is verified down to the source code
level, using formal verification methods where feasible.
Trusted Design Environment is where the TCB is designed in a trusted facility with only trusted (cleared) personnel.
The following are incorrect answers:
C1 is Discretionary Security Protection.
C3 does not exist; it is only a distractor.
B1 is called Labeled Security Protection.
Reference(s) used for this question:
HARE, Chris, Security Management Practices CISSP Open Study Guide, version 1.0, April
1999.
and AIOv4 Security Architecture and Design (pages 357 - 361)
AIOv5 Security Architecture and Design (pages 358 - 362)


Technical controls such as encryption and access control can be built into the operating
system, be software applications, or can be supplemental hardware/software units. Such
controls, also known as logical controls, represent which pairing?


A.

 Preventive/Administrative Pairing




B.

Preventive/Technical Pairing


C.

Preventive/Physical Pairing


D.

Detective/Technical Pairing





B.
  

Preventive/Technical Pairing



Preventive/Technical controls are also known as logical controls and can be
built into the operating system, be software applications, or can be supplemental
hardware/software units.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 34.

In Rule-Based Access Control (RuBAC), access is determined by rules. Such rules would fit
within what category of access control?


A.

 Discretionary Access Control (DAC)




B.

Mandatory Access control (MAC)


C.

 Non-Discretionary Access Control (NDAC)


D.

Lattice-based Access control





C.
  

 Non-Discretionary Access Control (NDAC)



Rule-based access control is a type of non-discretionary access control
because access is determined by rules and the subject does not decide what those
rules will be; the rules are uniformly applied to ALL of the users or subjects.
In general, all access control policies other than DAC are grouped in the category of non-discretionary
access control (NDAC). As the name implies, policies in this category have
rules that are not established at the discretion of the user. Non-discretionary policies
establish controls that cannot be changed by users, but only through administrative action.
Both Role Based Access Control (RBAC) and Rule Based Access Control (RuBAC) fall
within Non-Discretionary Access Control (NDAC). If it is not DAC or MAC then it is most likely NDAC.
IT IS NOT ALWAYS BLACK OR WHITE
The different access control models are not totally exclusive of each other. MAC makes
use of rules in its implementation. However, with MAC you have requirements above and
beyond simple access rules: the subject must get formal approval from
management, the subject must have the proper security clearance, and objects must have
labels/sensitivity levels attached to them.
If all of this is in place then you have MAC.
BELOW YOU HAVE A DESCRIPTION OF THE DIFFERENT CATEGORIES:
MAC = Mandatory Access Control
Under a mandatory access control environment, the system or security administrator will
define what permissions subjects have on objects. The administrator does not dictate
users' access but simply configures the proper level of access as dictated by the Data
Owner.
The MAC system will look at the Security Clearance of the subject and compare it with the
object sensitivity level or classification level. This is what is called the dominance
relationship.
The subject must DOMINATE the object's sensitivity level, which means that the subject
must have a security clearance equal to or higher than that of the object he is attempting to access.
MAC also introduces the concept of labels. Every object will have a label attached to it
indicating the classification of the object as well as categories that are used to impose the
need-to-know (NTK) principle. Even though a user has a security clearance of Secret, it does
not mean he would be able to access any Secret document within the system. He would
be allowed to access only Secret documents for which he has a need to know, formal
approval, and where he belongs to one of the categories attached to the object.
If there is no clearance and no labels then IT IS NOT Mandatory Access Control.
Many of the other models can mimic MAC, but none of them have labels and a dominance
relationship, so they are NOT in the MAC category.
NISTIR-7316 says:
Usually a labeling mechanism and a set of interfaces are used to determine access based
on the MAC policy; for example, a user who is running a process at the Secret classification should not be allowed to read a file with a label of Top Secret. This is known
as the “simple security rule,” or “no read up.” Conversely, a user who is running a process
with a label of Secret should not be allowed to write to a file with a label of Confidential.
This rule is called the “*-property” (pronounced “star property”) or “no write down.” The *-
property is required to maintain system security in an automated environment. A variation
on this rule called the “strict *-property” requires that information can be written at, but not
above, the subject’s clearance level. Multilevel security models such as the Bell-La Padula
Confidentiality and Biba Integrity models are used to formally specify this kind of MAC
policy.
DAC = Discretionary Access Control
DAC is also known as an identity-based access control system.
The owner of an object is defined as the person who created the object. As such, the owner
has the discretion to grant access to other users on the network. Access will be granted
based solely on the identity of those users.
Such a system is good for low levels of security. One of the major problems is the fact that a
user who has access to someone else's file can further share the file with other users
without the knowledge or permission of the owner of the file. Very quickly this could
become the Wild West, as there is no control on the dissemination of the information.
RBAC = Role Based Access Control
RBAC is a form of non-discretionary access control.
Role-based access control usually maps directly to the different types of jobs performed
by employees within a company.
For example, there might be 5 security administrators within your company. Instead of
creating each of their profiles one by one, you would simply create a role and assign the
administrators to the role. Once an administrator has been assigned to a role, he will
IMPLICITLY inherit the permissions of that role.
RBAC is a great tool for environments where there is a large rotation of employees on a
daily basis, such as a very large help desk.
RBAC or RuBAC = Rule Based Access Control
RuBAC is a form of non-discretionary access control.
A good example of a rule-based access control device would be a firewall: a single set of
rules is imposed on all users attempting to connect through the firewall, as illustrated in the
sketch below.
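Below is a hedged Python sketch of that idea: one ordered rule set evaluated the same way for every connection, with a default deny when nothing matches. The rule fields and sample addresses are assumptions for illustration, not any real firewall's configuration syntax.

# Hedged sketch of rule-based access control in the style of a firewall rule set:
# the same ordered rules apply to every subject; the first matching rule decides.
from ipaddress import ip_address, ip_network

# (source network, destination port, action) - sample values for illustration only;
# a port of 0 in a rule means "any port".
rules = [
    ("10.0.0.0/8",   443, "allow"),
    ("10.0.0.0/8",    22, "allow"),
    ("0.0.0.0/0",      0, "deny"),   # catch-all: everything else is denied
]

def evaluate(src_ip: str, dst_port: int) -> str:
    for network, port, action in rules:
        if ip_address(src_ip) in ip_network(network) and port in (0, dst_port):
            return action
    return "deny"   # default deny if no rule matches

print(evaluate("10.1.2.3", 443))      # allow - internal host to HTTPS
print(evaluate("203.0.113.5", 443))   # deny  - external source hits the catch-all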
NOTE FROM CLEMENT:
A lot of people tend to confuse MAC and Rule-Based Access Control.
Mandatory Access Control must make use of LABELS. If there are only rules and no labels, it
cannot be Mandatory Access Control. This is why they call it Non-Discretionary Access
Control (NDAC).
There are even books out there that are WRONG on this subject. Books are sometimes
opinionated and not strictly based on facts.
In MAC, subjects must have clearance to access sensitive objects. Objects have labels that
contain the classification to indicate the sensitivity of the object, and the label also has
categories to enforce the need to know.
Today the best example of rule-based access control would be a firewall. All rules are
imposed globally on any user attempting to connect through the device. This is NOT the
case with MAC.
I strongly recommend you read carefully the following document:
NISTIR-7316 at http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf
It is one of the best access control study documents to prepare for the exam. Usually I tell
people not to worry about the hundreds of NIST documents and other references. This
document is an exception. Take some time to read it.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten
Domains of Computer Security, 2001, John Wiley & Sons, Page 33.
and
NISTIR-7316 at http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf
and
Conrad, Eric; Misenar, Seth; Feldman, Joshua (2012-09-01). CISSP Study Guide (Kindle
Locations 651-652). Elsevier Science (reference). Kindle Edition.
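For contrast with the rule-based sketch above, here is a minimal Python illustration of the MAC dominance check described in the explanation: the subject's clearance must be equal to or higher than the object's classification, and the subject must hold the object's categories (need to know). The level ordering and category names are assumptions for demonstration.

# Hedged sketch of a MAC dominance check: clearance level must dominate the object's
# classification, and the subject must hold all of the object's categories.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(subject_clearance: str, subject_categories: set,
              object_label: str, object_categories: set) -> bool:
    level_ok = LEVELS[subject_clearance] >= LEVELS[object_label]
    need_to_know_ok = object_categories.issubset(subject_categories)
    return level_ok and need_to_know_ok

# A Secret-cleared user cannot read a Secret object outside his categories:
print(dominates("Secret", {"PROJECT_X"}, "Secret", {"PROJECT_Y"}))        # False
# The same user can read a Confidential PROJECT_X object (clearance dominates):
print(dominates("Secret", {"PROJECT_X"}, "Confidential", {"PROJECT_X"}))  # True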

Access control is the collection of mechanisms that permits managers of a system to
exercise a directing or restraining influence over the behavior, use, and content of a
system. It does not permit management to:


A.

specify what users can do




B.

specify which resources they can access


C.

Specify how to restrain hackers


D.

specify what operations they can perform on a system.





C.
  

Specify how to restrain hackers



Access control is the collection of mechanisms that permits managers of a
system to exercise a directing or restraining influence over the behavior, use, and content
of a system. It permits management to specify what users can do, which resources they
can access, and what operations they can perform on a system. Specifying HOW to
restrain hackers is not directly linked to access control.
Source: DUPUIS, Clement, Access Control Systems and Methodology, Version 1, May
2002, CISSP Open Study Group Study Guide for Domain 1, Page 12.

What is considered the most important type of error to avoid for a biometric access control
system?


A.

 Type I Error




B.

 Type II Error


C.

Combined Error Rate


D.

Crossover Error Rate





B.
  

 Type II Error



When a biometric system is used for access control, the most important error
is the false accept or false acceptance rate, or Type II error, where the system would
accept an impostor.
A Type I error is known as the false reject or false rejection rate and is not as important in
the security context as a Type II error. A Type I error occurs when a valid company employee is
rejected by the system and cannot get access even though he is a valid user.
The Crossover Error Rate (CER) is the point at which the false rejection rate equals the
false acceptance rate if you were to graph the Type I and Type II errors. The lower
the CER, the better the device.
The Combined Error Rate is a distractor and does not exist.
Source: TIPTON, Harold F. & KRAUSE, Micki, Information Security Management
Handbook, 4th edition (volume 1), 2000, CRC Press, Chapter 1, Biometric Identification
(page 10).

Almost all types of detection permit a system's sensitivity to be increased or decreased
during an inspection process. If the system's sensitivity is increased, such as in a biometric
authentication system, the system becomes increasingly selective and has the possibility of
generating:


A.

 Lower False Rejection Rate (FRR)


 


B.

 Higher False Rejection Rate (FRR)


C.

 Higher False Acceptance Rate (FAR)


D.

It will not affect either FAR or FRR





B.
  

 Higher False Rejection Rate (FRR)



Almost all types of detection permit a system's sensitivity to be increased or
decreased during an inspection process. If the system's sensitivity is increased, such as in
a biometric authentication system, the system becomes increasingly selective and has a
higher False Rejection Rate (FRR).
Conversely, if the sensitivity is decreased, the False Acceptance Rate (FAR) will increase.
Thus, to have a valid measure of the system performance, the Crossover Error Rate (CER)
is used. The Crossover Error Rate (CER) is the point at which the false rejection rates and
the false acceptance rates are equal. The lower the value of the CER, the more accurate
the system.
There are three categories of biometric accuracy measurement (all represented as
percentages): False Reject Rate (a Type I Error): When authorized users are falsely rejected as
unidentified or unverified.
False Accept Rate (a Type II Error): When unauthorized persons or imposters are falsely
accepted as authentic.
Crossover Error Rate (CER): The point at which the false rejection rates and the false
acceptance rates are equal. The smaller the value of the CER, the more accurate the
system.
NOTE:
Within the ISC2 book they make use of the term Accept or Acceptance and also Reject or
Rejection when referring to the type of errors within biometrics. Below we make use of
Acceptance and Rejection throughout the text for consistency. However, on the real exam
you could see either of the terms.
Performance of biometrics
Different metrics can be used to rate the performance of a biometric factor, solution, or
application. The most common performance metrics are the False Acceptance Rate FAR
and the False Rejection Rate FRR.
When using a biometric application for the first time, the user needs to enroll in the system.
The system requests fingerprints, a voice recording or another biometric factor from the
operator, this input is registered in the database as a template which is linked internally to a
user ID. The next time when the user wants to authenticate or identify himself, the
biometric input provided by the user is compared to the template(s) in the database by a
matching algorithm which responds with acceptance (match) or rejection (no match).
FAR and FRR
The FAR or False Acceptance Rate is the probability that the system incorrectly authorizes
a non-authorized person, due to incorrectly matching the biometric input with a valid
template. The FAR is normally expressed as a percentage; following the FAR definition, this
is the percentage of invalid inputs which are incorrectly accepted.
The FRR or False Rejection Rate is the probability that the system incorrectly rejects
access to an authorized person, due to failing to match the biometric input provided by the
user with a stored template. The FRR is normally expressed as a percentage; following the
FRR definition, this is the percentage of valid inputs which are incorrectly rejected.
FAR and FRR are very much dependent on the biometric factor that is used and on the
technical implementation of the biometric solution. Furthermore, the FRR is strongly person dependent; a personal FRR can be determined for each individual.
Take this into account when determining the FRR of a biometric solution: one person is
insufficient to establish an overall FRR for a solution. Also, FRR might increase due to
environmental conditions or incorrect use, for example when using dirty fingers on a
fingerprint reader. Usually the FRR lowers as a user gains more experience in how to
use the biometric device or software.
FAR and FRR are key metrics for biometric solutions; some biometric devices or software
even allow you to tune them so that the system matches or rejects more readily. Both FRR and
FAR are important, but for most applications one of them is considered most important.
Two examples to illustrate this:
When biometrics are used for logical or physical access control, the objective of the
application is to disallow access to unauthorized individuals under all circumstances. It is clear that a very low FAR is needed for such an application, even if it comes at the price of
a higher FRR.
When surveillance cameras are used to screen a crowd of people for missing children, the
objective of the application is to identify any missing children that come up on the screen.
When the identification of those children is automated using face recognition software,
this software has to be set up with a low FRR. As such a higher number of matches will be
false positives, but these can be reviewed quickly by surveillance personnel.
False Acceptance Rate is also called False Match Rate, and False Rejection Rate is
sometimes referred to as False Non-Match Rate.
Crossover error rate

(Figure: graphical representation of FAR and FRR errors on a graph, indicating the CER.)
CER
The Crossover Error Rate or CER is illustrated on the graph above. It is the rate where
both FAR and FRR are equal.
The matching algorithm in a biometric software or device uses a (configurable) threshold
which determines how close to a template the input must be for it to be considered a
match. This threshold value is in some cases referred to as sensitivity; it is marked on the X
axis of the plot. When you reduce this threshold there will be more false acceptance errors
(higher FAR) and fewer false rejection errors (lower FRR); a higher threshold will lead to a
lower FAR and a higher FRR.
Speed
Most manufacturers of biometric devices and software can give clear numbers on the time
it takes to enroll as well as on the time it takes for an individual to be authenticated or identified using
their application. If speed is important then take your time to consider this: 5 seconds might seem a short time on paper or when testing a device, but if hundreds of people will use the
device multiple times a day, the cumulative loss of time might be significant.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third
Edition ((ISC)2 Press) (Kindle Locations 2723-2731). Auerbach Publications. Kindle
Edition.
and
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten
Domains of Computer Security, 2001, John Wiley & Sons, Page 37.
and
http://www.biometric-solutions.com/index.php?story=performance_biometrics                 
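As a rough numerical illustration of the FAR/FRR trade-off and the CER, the Python sketch below sweeps a matching threshold over fabricated genuine and impostor match scores and reports the threshold where the two error rates are closest. The scores and threshold range are made-up assumptions, not data from any real biometric device.

# Hedged sketch: compute FAR and FRR over a range of thresholds and approximate the
# Crossover Error Rate (CER) as the point where the two rates are closest.
genuine_scores  = [0.91, 0.85, 0.78, 0.88, 0.95, 0.70, 0.82]   # authorized users
impostor_scores = [0.35, 0.48, 0.52, 0.60, 0.42, 0.55, 0.30]   # impostors

def rates_at(threshold: float):
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s <  threshold for s in genuine_scores)  / len(genuine_scores)
    return far, frr

# Sweep thresholds; a lower threshold raises FAR, a higher threshold raises FRR.
best = min((abs(far - frr), t, far, frr)
           for t in [i / 100 for i in range(0, 101)]
           for far, frr in [rates_at(t)])
_, threshold, far, frr = best
print(f"Approximate CER near threshold {threshold:.2f}: FAR={far:.2f}, FRR={frr:.2f}")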

