BlindSense : An Accessibility-inclusive Universal User Interface for Blind People

A large number of blind people use smartphone-based assistive technology to perform their everyday activities. To provide a better user experience, the existing user interface paradigm needs to be revisited. This paper proposes a simplified, semantically consistent, and blind-friendly adaptive user interface model. The proposed solution is evaluated through an empirical study with 63 blind participants, who reported an improved user experience in performing common activities on a smartphone.

Keywords: adaptive UI; blind people; smartphone; blind-friendly


INTRODUCTION
A large number of blind people use state-of-the-art assistive technologies to perform their daily activities [1-3]. Smartphone-based assistive technologies are an emerging trend for blind people due to built-in features such as accessibility services, usability aids, and enhanced interactions [4, 5]. Accessibility services such as screen readers, gesture controls, haptic feedback, screen magnifiers, large text, color contrast, inverted colors, screen brightness control, shortcuts, and virtual assistants help blind and visually impaired people perform many operations on smartphones. However, existing smartphone-based interfaces suffer from several issues that prevent them from delivering a unified, usable, and adaptive solution to blind people. Navigational complexity in the interface design, inconsistency in buttons, icons, and screen layouts, difficulty in identifying and selecting non-visual items on the screen, and traditional input mechanisms all contribute to increased cognitive load [6]. In addition, every mobile application has its own flow of interaction, placement of non-visual items, layout preferences, and distinct functionality. Notably, it is difficult to strike a balance between the accessibility and usability of a mobile application: in most cases, mobile apps are either accessible but barely usable, or usable but barely accessible [7]. Most currently available mobile apps remain inaccessible to blind people, either because they have limited usability or because they do not adhere to web/mobile accessibility guidelines [8]. The usability of a smartphone-based user interface can be improved through adaptive user interface paradigms. Adaptive user interfaces support context-awareness and can generate a new instance of the interface in response to changes in the environment, user preferences, and device usage [9]. This can help blind people personalize their smartphone-based user interface (UI) layouts, widgets, and
UI controls of a particular application, irrespective of their technical ability, skill set, and device handling capabilities. Achieving a considerably improved blind-friendly interface design requires an extensive revision of the UI to meet the requirements and needs of blind people. This may require a technical framework supported by an adaptation mechanism that addresses diverse user capabilities, needs, and contexts of use to ensure a high degree of usability and acceptability [10].
This paper aims to devise a universal UI design for blind people that customizes the interface components of commonly used mobile applications into a blind-friendly, simplified UI. This provides a simplified, semantically consistent, easy-to-use interface for operating common mobile apps on a smartphone. In addition, blind people gain better control over interface customization and the re-organization of interface elements according to their requirements. The vital contribution of this paper is an improved user experience for blind people. The next section provides an overview of related work pertaining to accessibility-inclusive user interface design.

II. RELATED WORK
Through a series of studies, researchers have analyzed and identified recommendations for accessibility-inclusive UIs for blind people [11, 12]. The emergence of smartphone-based UIs has opened new vistas for visually impaired and blind people, but it also raises the challenge of usability and accessibility, i.e., how to make these devices more usable for blind people [13]. In the pre-touchscreen era, mobile devices provided physical controls for navigation and operation. Existing touchscreen interfaces, in contrast, are prone to a number of issues because the absence of physical buttons and tactile UI controls makes these devices harder to operate [14, 15].

The inclusion of accessibility in performing daily tasks through different applications and systems is highlighted in [14, 16-19]. Screen readers and built-in accessibility services have considerably improved the usability of these devices for blind people [20]. The preliminary focus was the ability to interact with smartphones when performing common tasks such as reading a text message or identifying objects of interest and colors [4]. The advent of touchscreen technology replaced the physical controls, elements, and directional anchors, creating difficulties in several operations. However, tactile feedback, haptics, and multimodal interaction provide a better basis for visual/auditory interactions [17, 21]. The touchscreen also offers a number of opportunities, such as haptic feedback, gesture control systems [22], text-to-speech, and screen-reading accessibility services (e.g., TalkBack for Android, VoiceOver for Apple) that enable blind people to have the contents of the screen read out and to operate smartphone interfaces [23]. Blind people usually avoid content that poses accessibility problems for them [24]. Even sighted people spend 66% of their time editing and correcting text in automatic speech recognizer output on desktop systems [25].

Besides the issues reported above, Table I summarizes the usability issues blind people face in performing various activities on smartphones. These problems are identified and analyzed in the specific context of the HCI model [26], covering the task, domain, dialog, presentation, platform, and user models. Each issue is listed below with its affected HCI models and resulting problems:

- Locating and identifying non-visual items [12, 23-27]: Placing non-visual items on the screen and locating and identifying a particular item of interest are key issues. Remembering the status of user actions and following a pattern of activities is a challenge, and searching for and retrieving particular information is also difficult. (Models: task, dialog, presentation, user. Problems: semantic loss, navigational complexity, task adequacy.)
- Keys with multiple functions [28]: The lack of physical keys on a soft keypad results in a higher chance of wrong touches. Moreover, many actions are associated with one key, which creates confusion; users are often unaware of what functionality is associated with a particular key. (Models: task, presentation, user. Problems: semantic loss, task adequacy.)
- Automated assistance [29]: Automated assistance tools push information proactively without a user request. Extensive use of such assistance systems may burden blind people. (Models: user, platform. Problems: semantic loss, task adequacy, cognitive overload.)
- Haptic feedback [30]: Haptic feedback and gesture controls are an emerging issue for blind people; e.g., consistent and appropriate feedback at the right time is inadequate in existing interfaces. (Models: task, dialog, presentation, user. Problems: task adequacy, dimensional trade-off.)
- User control over interface components (UI adaptation) [31]: Inadequate UI flexibility and limited control over UI personalization are key issues. Every mobile application should provide meaningful entry and exit points, accommodate user requirements, and allow the user to customize interface layouts and manipulate non-visual objects directly. (Models: task, dialog, presentation, user. Problems: semantic loss, navigational complexity, task adequacy, dimensional trade-off.)
- Device incompatibility [32]: The final user interface generated on different devices behaves differently and does not offer interoperability across operating systems and devices. Certain applications require pre-installed libraries and utilities to operate correctly. Cross-mobile and cross-platform support is a primary aspect lacking in currently available interfaces.
- Persistency and consistency [31]: Existing interfaces have limited persistency and consistency, which makes it difficult for blind people to remember every action on the screen. (Models: task, presentation. Problems: semantic loss, navigational complexity, task adequacy.)
- Learnability and discoverability of the UIs [12]: Learnability and discoverability are key challenges in currently available applications. Discoverability covers the time and ease with which the user can begin an effective interaction with the system. (Models: task, presentation, user. Problems: semantic loss, navigational complexity, task adequacy.)
- Inadequate mapping of feedback [10]: Although a first generation of haptic feedback is available in the form of vibration motors, it provides only a limited sensation for blind people operating smartphones. (Models: task, dialog, presentation, user. Problems: semantic loss, task adequacy, dimensional trade-off.)
- Exhaustive text entry [12]: Typical keypads, inadequate labels, small UI elements, and text-to-speech responses during text entry reduce the efficiency of blind people. The error rate and the number of missed touches on traditional keypads are usually high. (Models: task, presentation. Problems: semantic loss, task adequacy.)
- Screen orientation, size, and resolution [17]: The usability of touchscreen interfaces is affected by screen characteristics such as screen size and changes of orientation. Small buttons and UI elements adversely affect performance, and orientation changes further increase the difficulty of learning and discovering the interface. (Models: task, dialog, presentation, user. Problems: semantic loss, navigational complexity, task adequacy, dimensional trade-off, device independence.)
- User model fragmentation [13, 17]: Every application maintains several models locally, storing and retrieving model information from a local repository to enable model reuse. However, heterogeneous application models may reflect only a partial view of user behavior and application usage in a particular scenario.

www.etasr.com Khan et al.: BlindSense: An Accessibility-inclusive Universal User Interface for Blind People
In summary, the usability of touchscreen UIs merits further investigation. This requires revamping existing UIs based on the needs and expectations of blind people. Many researchers now emphasize a user-adaptive paradigm for designing simple-to-use, accessible, and user-friendly interfaces based on HCI guidelines [14, 27-29]. In addition, a few studies have proposed usable, accessibility-inclusive UIs, but the results need further improvement: the accessibility, usability, and technical and operational effectiveness of smartphone-based UIs for blind people all merit attention. From the findings of our literature review in the areas of human-computer interaction, usability, accessibility, and the diversified requirements of blind people, we developed a universal accessibility framework for smartphone UIs for blind people. The framework was designed with the related work in Table I in view. The proposed BlindSense universal UI design is discussed in the next section.

III. BLIND-FRIENDLY UNIVERSAL USER INTERFACE DESIGN
The technical abilities and tasks involved in the design of a smartphone-based, blind-friendly UI were analyzed in the previous section. We analyzed common mobile applications by capturing the nature of each app, its category, total number of activities, number of inputs and outputs, number of UI controls used, context of use, and minimal feature set. These common applications include SMS, Call, Contacts, Email, Skype, WhatsApp, Facebook, Twitter, Calendar, Location, Clock, Reminders, Reading Books, Reading Documents, Identifying Products, Reading News, Weather, Instagram, and Chrome. However, these applications were designed for sighted people, so a number of their activities and sub-activities are redundant or repetitive, have complex navigational structures, or require long routes to follow. The minimal feature sets were extracted through manual usability heuristics; the information is reported in Table II. A minimal set of activities, inputs, outputs, and contents for operating common applications was thus outlined prior to the design of our proposed architecture.
The proposed BlindSense is a simplified, consistent, usable, and adaptive universal UI model based on user preferences, device logging, and context of use. Its novel contribution is to generate an optimal interface from the UI controls, layouts, interfaces, and widgets of existing common applications. This automatic transformation relies on semantic web technologies to model and transform user interfaces, turning the complicated designs of existing mobile applications into a blind-friendly, simplified UI. BlindSense has a pluggable, layer-based architecture, promoting openness and flexibility in the technical design of the system. Designers or users can define their own screen layouts, text-entry plug-ins, adaptation rules, templates, themes, and modes/patterns of interaction. The architecture is illustrated in Figure 1 and detailed below.

A. User Interface Layer
The UI layer serves as the interaction point between the smartphone and the blind user. The BlindSense application presents a wizard that allows blind people to customize their UI. User inputs are captured through text entry, gesture controls, and voice commands for personalization and other operations. The application transforms features extracted from the Common Element Set (CES) into a Minimal Feature Set (MFS) through a process of abstraction and adaptation. The CES describes features of the UI, layouts, themes, widgets, etc. The system automatically extracts the MFS from the user preferences, device logging history, context of use, and environment. These feature sets are deployed at the activity or application level, depending on the number of UI elements and inputs/outputs at each level, as reported in Table II.

Figure 1. Universal user interface architecture
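The CES-to-MFS reduction described above can be illustrated as a simple filtering step. This is a hypothetical sketch, not the BlindSense implementation: the function name, the `pinned` preference field, and the usage threshold are all assumptions made for illustration.

```python
# Illustrative sketch: derive a Minimal Feature Set (MFS) from a Common
# Element Set (CES) by keeping only elements the user actually relies on.
def extract_mfs(ces, usage_log, preferences, min_uses=3):
    """Keep CES elements used at least `min_uses` times in the device log,
    plus anything explicitly pinned in the user's preferences."""
    pinned = set(preferences.get("pinned", []))
    counts = {}
    for element in usage_log:
        counts[element] = counts.get(element, 0) + 1
    return sorted(e for e in ces
                  if e in pinned or counts.get(e, 0) >= min_uses)

ces = ["compose", "inbox", "archive", "labels", "settings", "themes"]
log = ["compose", "inbox", "compose", "inbox", "inbox", "compose"]
mfs = extract_mfs(ces, log, {"pinned": ["settings"]})
# "compose" and "inbox" survive via usage counts, "settings" via pinning.
```

In a real deployment the usage log would come from the device logging history mentioned above rather than an in-memory list.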

B. Transformation Layer
This layer ensures the delivery of a simplified, personalized UI by representing user, impairment, accessibility, device, UI component, and adaptation models. The adaptation knowledge base contains a set of personalization and adaptation rules. The inputs from the User Information Model (UIM) and the context model are processed on this layer, resulting in the generation of the simplified UI. A user model may contain static information (such as screen partitions) or dynamic information (such as the level of abstraction). The UIM consists of the following profiles: user capability, interest (level: high, medium, or low; category: computers, sports, entertainment, food, reading, etc.), education, health, impairment (visually impaired, blind, deaf-blind, motor-impaired, etc.), emergency, and social. The adaptation manager consists of classes representing the information in the several models used for UI adaptation. It retrieves the abstraction levels and adaptation rules related to a specific disability from the adaptation repository. The abstraction mechanism can be applied at the element, group, presentation, or application level; for instance, an adaptation rule governing the sequence of actions and activities is applied at the task and domain level. The final UI is generated using Android XML layouts. When the user preferences change, the adaptation components are updated with the latest information retrieved from the user profile ontology, and a new instance of the UI is generated.
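The final generation step described above can be sketched roughly as serializing the retained elements into an Android-style linear layout. This is a hedged sketch under the assumption that every simplified screen is a vertical list of large buttons; the helper name and the choice of attributes are illustrative, not the paper's code.

```python
# Hypothetical sketch: emit an Android-style vertical LinearLayout with one
# large, labeled button per retained UI element.
def generate_layout_xml(elements, button_height_dp=96):
    """Serialize `elements` into a vertical layout of large touch targets."""
    rows = "\n".join(
        f'    <Button android:id="@+id/{name}" '
        f'android:text="{name.capitalize()}" '
        f'android:layout_width="match_parent" '
        f'android:layout_height="{button_height_dp}dp" />'
        for name in elements)
    return ('<LinearLayout android:orientation="vertical">\n'
            + rows + '\n</LinearLayout>')

xml = generate_layout_xml(["call", "message", "contacts"])
```

Keeping every screen on the same vertical-list template is one way to realize the semantic consistency the model targets: the position of an element, not its appearance, is what the user memorizes.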

C. Context Layer
This layer captures and stores information pertaining to the device, environment, user, and context through a context extractor. The context model is composed of user, platform, and environment models. The user model describes needs and preferences, while the platform model provides device and platform information, including screen resolution and size, screen divisions, button size, keypads, aspect ratio, etc. The environment model represents information specific to the location of the user's point of interaction, the level of ambient light, etc. Selective context sensing is performed in our case: light and noise sensing, for example, are not required continuously for UI updating, so they can be set up once, stored, and retrieved at any time. The smartphone sensors store data as key-value pairs, nested structures, and formal ontology entries. A user context model containing information about the context provider, context property, and context status is generated at the end. The context data extractor filters the context data for relevance to UI adaptation.
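The selective filtering step at the end of this layer can be sketched as follows. This is an assumption-laden illustration: the set of UI-relevant properties and all key names are invented for the example, and real sensor readings would arrive from the platform rather than a dictionary literal.

```python
# Illustrative sketch: the context extractor keeps only the key-value
# sensor readings registered as relevant to UI adaptation.
UI_RELEVANT = {"ambient_light", "orientation", "location_type"}  # assumed set

def filter_context(raw_context):
    """Drop sensor readings that cannot trigger a UI adaptation."""
    return {k: v for k, v in raw_context.items() if k in UI_RELEVANT}

raw = {"ambient_light": "low", "orientation": "portrait",
       "battery_temp": 31.5, "step_count": 4200}
ctx = filter_context(raw)
# battery_temp and step_count are filtered out before adaptation.
```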

D. Semantic Layer
Deeper insight into UI adaptation involves handling model information, context-awareness, and their associated semantics. This layer encapsulates access to a comprehensive UI adaptation ontology covering user profiling and preferences, adaptation, context, devices, and accessibility. It provides technology-independent access to the metadata encoded in the ontology. Additional content may also be associated with the activities and tasks related to UI modeling, e.g., multimedia captions, audio descriptions, and interpretations of several other patterns. The architecture is developed using a re-configurable, modular approach to incorporate semantic web technologies.
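The technology-independent metadata access described here amounts to querying subject-predicate-object facts by pattern. The toy sketch below shows only that access pattern; a production system would use a proper ontology store, and every triple here is an invented example, not data from the paper.

```python
# Minimal in-memory triple store illustrating pattern-matched access to
# ontology-style metadata (subject, predicate, object).
TRIPLES = [
    ("user42", "hasImpairment", "blind"),
    ("user42", "prefersInteraction", "voice"),
    ("blind", "requiresService", "screen_reader"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern; None is a wildcard."""
    return [t for t in TRIPLES
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

impairments = [o for _, _, o in query("user42", "hasImpairment")]
# The adaptation layer would use such lookups to pick disability-specific rules.
```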

E. Storage Layer
The storage layer is responsible for managing several storage sources, including ontologies and data stores. Information about user profiles, preferences, contextual data, adaptation rules, and layout details is stored in and retrieved from this layer once all required data has been articulated from the relevant models. BlindSense uses the semantic reasoning capabilities of ontological modeling to present a final UI to blind people. The user may change his or her preferences related to layout, theme, and interaction type at runtime. The structural model of the universal UI is represented as a state transition diagram (STD) in Figure 2.

All states are stored in the system and executed in a specific order. The diagram begins at START, where the system waits for input. Once the user provides an input, the remaining processes are initiated, switching through several states to complete an activity or perform a particular action. In case of error, the system returns to the initial state and the error is recorded in system memory.
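The transition behavior just described can be sketched as a small table-driven state machine. The state and event names are assumptions for illustration; Figure 2 in the paper contains the actual states.

```python
# Reduced sketch of the state transition diagram: wait in START, advance
# through processing states, and fall back to START on error while
# recording the error.
TRANSITIONS = {
    ("START", "input"): "PROCESSING",
    ("PROCESSING", "done"): "COMPLETE",
    ("PROCESSING", "error"): "START",   # return to the initial state
}

def run(events):
    state, error_log = "START", []
    for event in events:
        if event == "error":
            error_log.append((state, event))  # recorded in system memory
        state = TRANSITIONS.get((state, event), state)
    return state, error_log

final, errors = run(["input", "error", "input", "done"])
# -> final state "COMPLETE", one recorded error from "PROCESSING"
```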

F. Perspective Workflow
BlindSense can be used as an accessibility service or as a standalone application. When the accessibility service is enabled for the first time, the system loads specific commonly used installed applications and extracts their common element features. The user starts personalizing the layout by selecting a few preferences about screen divisions, mode of interaction, etc. The device logging, context of use, and user profiling are articulated automatically. The rules used for generating the simplified UI are checked in the adaptation repository, where specific adaptation rules are applied. When no specific rule is available for a given transformation, the transformation falls back to a default or baseline specification. The complete application simplification process is illustrated in Figure 3. The prototype was developed using the Android SDK.

Figure 3. BlindSense proof-of-concept
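The rule lookup with a baseline fallback described in this workflow can be sketched as below. The repository contents, rule fields, and action names are hypothetical; only the lookup-then-default pattern reflects the text.

```python
# Illustrative sketch: retrieve adaptation rules for an impairment profile
# and abstraction level; fall back to a baseline specification if none match.
BASELINE = [{"action": "apply_default_layout"}]

ADAPTATION_REPOSITORY = [
    {"impairment": "blind", "level": "presentation",
     "action": "linearize_layout"},
    {"impairment": "blind", "level": "element",
     "action": "enlarge_touch_targets"},
    {"impairment": "motor-impaired", "level": "element",
     "action": "add_dwell_selection"},
]

def select_rules(impairment, level):
    rules = [r for r in ADAPTATION_REPOSITORY
             if r["impairment"] == impairment and r["level"] == level]
    return rules or BASELINE  # default/baseline specification

actions = [r["action"] for r in select_rules("blind", "presentation")]
```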

IV. AIMS AND HYPOTHESES
We studied the user experience by analyzing user satisfaction in performing several activities on the proposed universal UI design. Each participant demonstrated his or her perceived usefulness, ease of use, system usability scale, and user experience. To the best of our knowledge, no similar universal UI design for blind people has been presented before. Thus, the aim was to investigate the effect on user experience of using common applications on a smartphone through the universal UI design, and to gain a systematic understanding of the user experience across overall operations. We also aimed to determine which variables are the most central to the user's experience. The following hypotheses were made:

H1: Perceived usefulness in performing common activities through a universal UI for blind people, in terms of success in solving tasks/activities on a smartphone, positively influences user satisfaction.
H2: Ease of use in personalizing a universal UI for blind people, in terms of task completion, personalization, and the number of accurate touches, improves the user experience.
H3: Consistency leads to a more positive attitude towards an improved user experience in accessing non-visual items and skipping irrelevant items on a universal UI for blind people.

H4: An improved system usability scale leads to a more positive attitude towards the use of a universal UI for blind people.
H5: Consistency in the interface elements leads to a more positive attitude towards the ease of use in accessing and operating a universal UI for blind people on a smartphone.
We also analyze whether a specific usability parameter was the most influential in a particular case. The key variables include user satisfaction, perceived usefulness, and ease of use; the system usability scale is expected to have a positive or negative influence on blind people's user experience.

V. EVALUATION AND RESULTS
The proposed solution was evaluated through an empirical study. The usability of the proposed UI design and the individual components of the architecture were evaluated using established HCI methods, metrics, and usability parameters. We were interested in the user experience of blind people performing a number of tasks associated with user interface customization and operating smartphone applications with ease.

A. Participation
Sixty-three participants (4 female, 59 male) took part in this study. The median age of the participants was 39 years, within a range of 22-56 years. In the pre-application assessment, participant experience was rated on a four-item scale: beginner, intermediate, advanced, and expert; the participants' level of smartphone usage experience varied from beginner to advanced. Usability experts observed the navigational and orientation skills of the blind participants in performing common tasks/activities, mainly judging their confidence and frustration levels. Table III summarizes general information about the participants, i.e., their background, age, gender, and smartphone usage experience. The participants reported their level of experience in the initial trials.

B. Procedure
The participants were introduced to the BlindSense framework, and the required steps were demonstrated one by one. The study spanned eleven weeks and consisted of the following components: (1) pre-application usage assessment and collection of background data, (2) an introductory session with our universal UI framework and initial trials, (3) in-the-wild device usage, and (4) interviews and observations. Participants were allowed to practice in a trial session on general tasks and activities, such as unlocking the phone, placing a call, and sending a message. The participants were then asked to perform 121 predefined tasks; the average time spent with each participant was about 66 minutes. The researchers were directly involved in observing the execution of the tasks, and nine professional facilitators assisted the participants throughout the study. We continued the sequence of grouping and interviewing in the same pattern until all participants had finished. All participants were provided with a Samsung S6 and an HTC One smartphone running Android, with the TalkBack screen reader, a data collection service, and the BlindSense application pre-installed. For each task we recorded the completion time and the degree of accuracy in performing common activities such as placing a call and sending messages. The results section presents the responses collected through a structured questionnaire, interviews, and observations. The university ethics committee/IRB approved the consent procedure for this study. The participants were informed about the study objective, procedure, and potential risks. The study checklist was communicated verbally to all blind participants, who gave their verbal approval, while their caretakers issued the written consent.

C. Analysis and Validation Procedures/Data Analysis
We ran a statistical correlation analysis of the observations to determine the relationships among the UX attributes of the universal UI: attitude, intention to use, perceived usefulness, understandability and learnability, operability, ease of use, system usability scale, minimal memory load, consistency, and user satisfaction. SPSS 21 with AMOS 21 was used for the analysis and structural modeling. The first step was to define a measurement model and test the relationships among the dependent and independent variables. The validity of the measurement model was assessed by checking goodness-of-fit indices (GFI). We used confirmatory factor analysis (CFA) with maximum likelihood estimation to verify the reliability, convergent validity, composite reliability, and average variance extracted for each construct. The measurement model had 60 variables for 10 latent variables. To confirm the fit of the proposed model, the Chi-square, Chi-square/d.f., GFI, incremental fit index (IFI), normed fit index (NFI), comparative fit index (CFI), Tucker-Lewis index (TLI), parsimony goodness-of-fit index (PGFI), and root mean square error of approximation (RMSEA) were assessed. These measures indicated that the estimated covariance matrices of the proposed measurement model and the observed model were satisfactory. Reliability was assessed through Cronbach's alpha. The CFA indicated a satisfactory overall fit: Chi-square/d.f. (x2/df) = 1.577, RMSEA = 0.076, CFI = 0.727, NFI = 0.939, IFI = 0.949, TLI = 0.696, PGFI = 0.539.
In addition, the measurement model showed strong internal reliability and convergent validity. The Cronbach's alpha values, item-total correlations, factor loadings, composite reliability, and average variance extracted all indicate a robust fit. Tables IV-VI show the confirmatory factor loadings of each item with their respective reliability scores. Factor loadings above 0.50 are generally considered acceptable, and the reported factor loadings exceed 0.60. Similarly, a Cronbach's alpha of 0.70 is considered an acceptable reliability score, and the reported scores are above 0.70. To verify the internal consistency of each latent variable, we also measured construct reliability, which is acceptable when the composite reliability is higher than 0.70 and the AVE is higher than 0.50; the reported scores are mostly within this acceptable range. Figure 4 shows the final structural model generated from the relationships among the latent variables, with the results depicted as standardized regression weights on the different paths. All paths were significant at the p < 0.001 level. As depicted, perceived usefulness has a strong impact on user satisfaction (path coefficient = 0.22). Overall, the research model explained a satisfactory amount of variance in the user experience of operating adaptive user interfaces. Table VII presents a summary. With respect to the hypotheses, perceived usefulness was positively associated with user satisfaction (H1, β = 0.303, p = 0.016) and ease of use (H2, β = 0.469, p < 0.001). Consistency (H3, β = 0.287, p = 0.023), system usability scale (H4, β = 0.400, p < 0.001), and consistency with respect to ease of use (H5, β = 0.320, p = 0.011) had positive effects on the user experience of blind people using the adaptive UI. The significance of all hypotheses was below 0.05; thus, each hypothesis is accepted.
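The reliability threshold discussed above can be made concrete. Cronbach's alpha for k questionnaire items is alpha = k/(k-1) * (1 - sum of per-item variances / variance of the total scores). The sketch below implements this standard formula; the sample scores are invented, not the study's data.

```python
# Cronbach's alpha: internal-consistency reliability of a set of scale items.
def cronbach_alpha(items):
    """`items` is a list of per-item score lists (respondents in same order)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three Likert items answered by five hypothetical respondents:
scores = [[4, 5, 3, 4, 5], [4, 4, 3, 5, 5], [5, 5, 2, 4, 4]]
alpha = cronbach_alpha(scores)
# alpha is about 0.81 here, i.e., above the 0.70 acceptability threshold.
```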

VI. DISCUSSION
Understanding the need for an accessibility-inclusive UI for blind people, our research addresses usability, ease of use, consistency, usefulness, and accessibility in generating a simplified, consistent, universal UI design for blind people. The study proposed, developed, and validated a blind-friendly universal UI design for operating common applications on a smartphone, resulting in an enriched user experience.
As hypothesized, the parameters used, i.e., ease of use, consistency, operability, perceived usefulness, minimal memory load, and system usability scale, were found to have a positive effect on user satisfaction and user experience. Ultimately, a consensus was reached on the acceptance of the universal user interface model. The users' attitude towards the suggested application was reported as effective, pleasant, and enjoyable. For statistical validation, this study measured ease of use, consistency, operability, perceived usefulness, minimal memory load, system usability scale, and understandability and learnability (i.e., the fundamental determinants of user acceptance in the Technology Acceptance Model (TAM)) through a survey questionnaire, with satisfactory responses. Through a series of model evaluations and validations, the hypothesis that user satisfaction is positively affected by the adaptation of the universal UI design for blind people was accepted. The study also verified the relationship between the usability of the UI and user satisfaction, which is an important factor in the design of smartphone-based UIs. The study results are consistent with earlier studies on the usability and accessibility of smartphone applications. The findings collectively indicate that various features of smartphone-based UIs and layouts, such as screen size, user controls, navigational complexity, user interaction, and feedback, convey positive psychological effects in a particular user context [30]. Methodologically, a potential threat to the investigation is applying this approach to visually dense interfaces such as games and entertainment applications. Moreover, smartphone capabilities can be used for both hedonic and utilitarian purposes [31]. As the results show, some users found the universal interface design convenient and efficient for completing their tasks, while others perceived it as somewhat uncomfortable and annoying. Therefore, the inclusion of more visually complex tasks should be investigated further.

VII. CONCLUSION
A large number of smartphone applications do not comply with mobile accessibility guidelines and do not specifically meet the requirements of blind people. As a result, blind people face numerous challenges in accessing and operating smartphone interface components, such as finding a button, understanding layouts, and navigating the interface. Moreover, a blind person has to learn every new application and its features, which hampers learnability and discoverability; applying previous experience to each new application results in varying user experiences. The findings of this study show that a simplified, semantically consistent, and context-sensitive universal UI design achieves a satisfactory positive evaluation. The main contribution of this research is an attempt to improve the user experience of blind people in operating smartphones through a universal interface design using the adaptive UI paradigm for personalization. We adopted measurement items from existing web/mobile usability studies and revised a number of parameters for this study. The proposed solution addresses the problems of simplicity, reduction, organization, and prioritization [32] by providing a semantically consistent, simplified, task-oriented, and context-sensitive UI design. During the study, the proposed intervention significantly reduced the users' cognitive overload. The consistent division of the smartphone screen enabled blind people to memorize the flow of activities and actions with ease; thus, there is little chance of getting lost in a given navigation workflow.
Our results show that our proposed solution is more robust, easier to use, and more adaptable than other solutions operated through the standard accessibility services. Our future work will focus on extending this framework to visually complex and navigationally dense applications. Emotion-based UI design may also be investigated. Moreover, the optimization of GUI layouts and elements will be considered, with particular focus on gesture control and eye-tracking systems.

TABLE I. COMMON USABILITY ISSUES IN TOUCHSCREEN USER INTERFACES FOR BLIND PEOPLE

TABLE V. INTERNAL RELIABILITY AND CONVERGENT VALIDITY - PART I

TABLE VI. INTERNAL RELIABILITY AND CONVERGENT VALIDITY - PART II
UC: Unstandardized Coefficient, SC: Standardized Coefficient, SE: Standard Error, P: Significance