Ameller2015

From nfrwiki

A survey on quality attributes in service-based systems (Ameller2015)

Reference Information
Ameller, D., Galster, M., Avgeriou, P. and Franch, X. “A survey on quality attributes in service-based systems”, Software Quality Journal, Kluwer Academic Publishers, 2015
Study Type Survey
System Type Service-based Systems
Attributes
The paper does not define the QAs itself; instead, all QAs stated by participants were mapped to the QAs for SBSs defined by the S-Cube quality model (Gehlert and Metzger 2009). The definitions below are therefore taken from S-Cube.
Dependability
Dependability covers all properties of a software system to assure that it delivers a certain service reliably. Consequently, the dependability quality attribute covers other quality attributes such as availability, reliability, fault-tolerance, recoverability and maturity.
Performance
Performance describes the timeliness aspects of the software system's behaviour. It therefore includes quality attributes such as latency, throughput and turn-around. Performance may also be used to describe the utilisation of resources in order to provide a particular service (e.g. memory and CPU usage). The latter performance aspects are refined by efficiency and demand.
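The latency/throughput distinction above can be illustrated with a minimal measurement sketch. This is not part of the S-Cube model; the wrapped callable stands in for any hypothetical service invocation, and real SBS monitoring would typically measure at the middleware level rather than in client code.

```python
import time

def measure(call, n=100):
    """Measure mean latency and throughput of a callable.

    Minimal sketch: `call` is a stand-in for a service invocation.
    """
    start = time.perf_counter()
    for _ in range(n):
        call()
    elapsed = time.perf_counter() - start
    latency = elapsed / n      # mean time per invocation, in seconds
    throughput = n / elapsed   # invocations per second
    return latency, throughput

lat, tps = measure(lambda: sum(range(1000)))
```

Note that for a single sequential client, latency and throughput are reciprocal; they diverge once invocations overlap, which is why the model treats them as separate attributes.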
Security
Security covers the capability of a software system to protect entities (e.g. data) and to protect the access to resources (e.g. printers). Security can be refined to access control and protection.
  • Access Control: specifies the control policy used for the access to services (e.g. security levels).
  • Protection: describes more generally the methods used to grant access to a service and the probability of a control break.
Reusability
-
Interoperability
The capability of the system to interact with one or more specified systems.
Data-related
In specific application domains, services do not only accept input parameters but also input data, and they may also produce output data. For example, a credit card service can accept as input a data file describing the user's credit card information and can produce as output a data file describing details of the transaction executed based on the functionality of the service. These input/output data are characterized by quality attributes that have traditionally been used in the information and data quality domains, like accuracy and timeliness [24]. Apart from traditional data quality attributes, two more attributes are added that characterize the way the service behaves with respect to the data it operates on or produces when it fails (data policy) and the degree of validity of the data (data integrity [24]).
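The timeliness and integrity checks described above can be sketched as simple predicates over a service's output record. The field names and the five-minute freshness threshold are assumptions chosen for illustration, not part of the credit card example in the paper.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical output record of a credit card service;
# field names are assumptions for illustration.
record = {
    "transaction_id": "tx-001",
    "amount": 19.99,
    "produced_at": datetime.now(timezone.utc),
}

def check_timeliness(rec, max_age=timedelta(minutes=5)):
    """Data timeliness: the output data must be recent enough to be useful."""
    return datetime.now(timezone.utc) - rec["produced_at"] <= max_age

def check_integrity(rec, required=("transaction_id", "amount", "produced_at")):
    """Data integrity: all required fields are present and non-empty."""
    return all(rec.get(f) not in (None, "") for f in required)

ok = check_timeliness(record) and check_integrity(record)
```

In practice such checks would sit in the service's data-quality monitoring layer rather than inline in application code.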
Configuration Management
Grids attempt to provide a stable and predictable environment for their users, and therefore many have a minimum software specification a compute or storage node must conform to. For example, [104] documents the components required for a resource to become part of the UK National Grid Service. Users are informed of any changes to components through mailing lists and on central websites well in advance (typically weeks ahead) of the work being carried out [58]. When changes are carried out, the upgrades and modifications are documented in "detailed change control logs" [58]. Thus, the presence of clear change management procedures on a Grid is an indication of how stable the environment may be. However, given the number of nodes a Grid may contain, it can be the case that some computers are out-of-step with the most recent version of an application or library when it is upgraded. Thus, many production Grids have a set of confirmation scripts and tests which can be automatically and periodically run against each node to determine its current configuration and alert users and administrators when a node does not meet the current specification. These tests can determine not only whether a Grid middleware component has the correct version but also whether it is correctly configured, running and capable of being used. The results of such tests can be seen at [95] and [94], for example.
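A confirmation script of the kind described above can be sketched as follows. The component names and minimum versions in `MINIMUM_SPEC` are invented for illustration; they are not the actual National Grid Service specification, and a real script would also check configuration and liveness, not just versions.

```python
import shutil
import subprocess

# Hypothetical minimum specification for a Grid node;
# tools and versions are assumptions for illustration.
MINIMUM_SPEC = {"python3": "3.0", "some-grid-middleware": "2.1"}

def installed_version(tool):
    """Return the first version-like token from `tool --version`,
    or None if the tool is not on the PATH."""
    if shutil.which(tool) is None:
        return None
    out = subprocess.run([tool, "--version"],
                         capture_output=True, text=True).stdout
    for token in out.split():
        if token[:1].isdigit():
            return token
    return None

def version_tuple(v):
    """Parse major.minor, ignoring any non-digit suffixes."""
    parts = []
    for p in v.split(".")[:2]:
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts)

def check_node(spec):
    """Compare each component on this node against the minimum spec,
    returning a list of failures for administrators to act on."""
    failures = []
    for tool, minimum in spec.items():
        version = installed_version(tool)
        if version is None:
            failures.append(f"{tool}: not installed")
        elif version_tuple(version) < version_tuple(minimum):
            failures.append(f"{tool}: {version} < required {minimum}")
    return failures
```

Run periodically (e.g. from cron) on every node, the non-empty failure lists are what would feed the alerting and the published test results mentioned above.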