Conceptualizing and Operationalizing Customer Perceived Ethicality (CPE) in the Indian Service Sector

While CPE has advanced both as a concept and as a construct for further research in Western geographies, there is little research in Asian countries. This study addresses the gap by evolving a conceptual framework and developing a scale for further research. The research is carried out in two phases: (1) an exploratory study using the long-interview methodology to arrive at domain items of CPE unique to India, with specific reference to the service sector; (2) operationalizing the CPE construct, based on two independent surveys using a 7-point Likert scale. Thirty-one well-known brands representing six service industries are included. Using SPSS and AMOS 23, factor analysis is deployed to arrive at a reliable, valid instrument. A psychometrically sound measurement scale for CPE is developed for the first time in India, based on the CPE domain items resulting from the exploratory research. It can be used to advance research on the relationships between CPE and brand-related constructs and, in business, to understand how CPE influences consumer decision making and to help firms prioritize budgets and activities. The research fills a gap identified in similar research in Europe, which called for replicating studies both at the conceptual level and in developing a measurement scale.
‘Measurement instruments that are collections of items combined into a composite score and intended to reveal levels of theoretical variables, not readily observable by direct means are referred to as scales’. The task is to reduce the variables into a meaningful, relatable set of items which can subsequently be administered to a sample and tested for its psychometric properties as a practical measurement tool of CPE.
In the referred study, six items were identified as initial scale items after analysing the contents of the exploratory study. Exploratory Factor Analysis (EFA) was performed on these six items, and the resulting single-factor solution was subjected to Confirmatory Factor Analysis (CFA) for scale refinement. It can be argued that all 36 subdomain items in that study should have been used to identify the underlying factors, instead of condensing them into six key themes. However, items from the larger list that seem to go together can be grouped. Item parcelling is an accepted procedure in factor analysis: items are grouped into one or more ‘parcels’ and used, instead of the detailed set of items, as the indicators of the target latent construct.
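The parcelling step described above can be sketched in a few lines. The grouping of 36 subdomain items into six equal parcels below is purely illustrative (the study's actual grouping followed the six key themes from the exploratory research), and the data are simulated 7-point Likert responses rather than survey data.

```python
import numpy as np

# Simulated data: 100 respondents answering 36 subdomain items on a
# 7-point Likert scale (integers 1..7). Illustrative only.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(100, 36))

# Illustrative parcel assignment: items 1-6 form parcel 1, items 7-12
# form parcel 2, and so on. Each parcel score is the mean of its items.
parcels = responses.reshape(100, 6, 6).mean(axis=2)

print(parcels.shape)  # (100, 6): six parcel scores per respondent
```

The six parcel scores would then serve as the indicators of the latent CPE construct in the factor analysis, in place of the 36 individual items.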
The progressive responses were collected in a spreadsheet in Google Drive, then copied and arranged in separate files and folders for subsequent processing. Once the data reached a sizeable number, it was recast using prespecified codes for gender, age group, educational qualifications, occupation, income range, category of service, and brand chosen. The questionnaire items were also identified with abbreviated codes such as CPE1….5, etc. Surveys started in May 2020 and were completed in August 2020. Around 350 responses were collected in Survey One. After dropping responses that had the same choice for all questions (indicative of responder fatigue) as well as outliers, 302 useful responses were identified. Other responses containing low or high values were retained, as these might be genuine views on the questionnaire items.
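The straight-line check described above, dropping respondents who picked the same point for every item, can be sketched with pandas. The column names CPE1–CPE5 and the toy data here are assumptions for illustration, not the study's dataset.

```python
import pandas as pd

# Toy data: three respondents, five Likert items (columns assumed for
# illustration). Respondents 0 and 2 give the same answer to every item.
df = pd.DataFrame({
    "CPE1": [7, 4, 3],
    "CPE2": [7, 5, 3],
    "CPE3": [7, 4, 3],
    "CPE4": [7, 3, 3],
    "CPE5": [7, 6, 3],
})
items = ["CPE1", "CPE2", "CPE3", "CPE4", "CPE5"]

# A straight-line response has only one distinct value across all items.
straight_line = df[items].nunique(axis=1) == 1
cleaned = df[~straight_line]

print(len(cleaned))  # 1: only the varied respondent is retained
```

Note that this flags only perfectly uniform response patterns; as the text observes, responses that are merely low or high throughout may be genuine views and are kept.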
The questionnaire had to be administered to an unknown audience and collected through Google Forms. There are software limitations that have to be overcome to get considered responses. The design ensured that every question had to be answered before moving to the next. While this may result in losing reluctant respondents, it makes the data more accurate and eliminates missing data; prevention is a superior solution to using software to deal with missing data afterwards. There were a total of 31 brand choices, so the questionnaire had to be carefully designed to route each respondent to the page corresponding to their choice. The final format in the Google Forms software had 78 pages in its design, while each respondent had to go through only a few of them.