Friday 14 December 2018

Descriptive Research Assignment Help

Learning outcomes
On successful completion of this module you will be able to:
  • determine when a descriptive research design is appropriate
  • distinguish between nominal, ordinal, interval and ratio scales of measurement
  • understand the criteria for good measurement: validity, reliability and sensitivity
  • distinguish between the different types of attitude scales
  • know how to design the questionnaire in an unbiased and effective manner
  • understand the importance of pretesting the questionnaire
  • understand the strengths and weaknesses of survey research.
Outline of this module
  • 6.1       Introduction
  • 6.2       Measurement and scaling
  • 6.3       Questionnaire design
  • 6.4       Summary
6.1 Introduction
The purpose of descriptive research is to collect primary data from a sample of individuals which is representative of
the population of interest, with surveys or questionnaires being the major research technique. Surveys involve asking people (called respondents) for information using either verbal or written questions. Survey instruments include personal interviews, telephone interviews and self-administered mail questionnaires.
This module begins by looking at measurement and scaling before moving to the design and administration of questionnaires.
6.2       Measurement and scaling
After defining the research problem and choosing the appropriate research design, the task of measurement needs to be undertaken. Unless variables are measured properly, we will not be able to test our hypotheses. The starting point in the measurement process is to identify the concepts (constructs) relevant to the problem. Each concept must be operationalised in order to be measurable. For each variable, the level of measurement (scale) needs to be chosen, and the validity and reliability of the measure need to be considered.
Types of scales
-           A scale is a tool or mechanism by which individuals are distinguished on the variables of interest in a study.
  • nominal scale: this allows the subjects or respondents to be assigned to certain categories or groups, for example, with respect to gender, either male or female. The categories are mutually exclusive and collectively exhaustive. Other nominal scales include religion, occupation, suburb etc.
  • ordinal scale: this has all the properties of a nominal scale but it also rank-orders the categories. At the ordinal level of measurement, numbers are assigned not only to categorise objects but also to indicate a greater-than or less-than relationship.
  • interval scale: this has the properties of both nominal and ordinal scales with additional features:
  • it incorporates the concept of equality of interval
  • the arithmetic mean can be calculated as the measure of central tendency
  • the standard deviation is the measure of dispersion
  • data must be collected using interval scales to conduct parametric testing, and the data must fulfil the assumptions of parametric testing
  • it is the researcher's responsibility to defend the choice of intervals.
  • ratio scale: this has all the benefits of the above plus an absolute zero or origin. Ratio variables such as sales, profits or number of employees are measured as continuous variables; however, when measured as categories they should be considered nominal or ordinal.
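As an illustration, the level of measurement determines which summary statistics are defensible. The following Python sketch, using only the standard library and invented data, shows a typical operation for each scale:

```python
from statistics import mean, median, mode, stdev

# Hypothetical survey data, one variable per measurement level
suburb = ["North", "South", "North", "East", "North"]    # nominal
service_rank = [1, 3, 2, 1, 2]                           # ordinal
satisfaction = [4, 5, 3, 4, 2]                           # interval (Likert treated as interval)
annual_sales = [120.0, 95.5, 240.0, 180.0, 60.0]         # ratio (true zero)

# Nominal: only counting and the mode are meaningful
print(mode(suburb))                                      # -> North

# Ordinal: medians and ranks are meaningful, means are not
print(median(service_rank))                              # -> 2

# Interval: the mean and standard deviation become meaningful
print(round(mean(satisfaction), 2), round(stdev(satisfaction), 2))  # -> 3.6 1.14

# Ratio: ratios are meaningful because zero is absolute
print(annual_sales[2] / annual_sales[4])                 # "four times as much" is a valid claim -> 4.0
```

The variable names and values here are invented for illustration; the point is only that each step up the scale hierarchy licenses additional arithmetic.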
Criteria for good measurement
Reliability
  • tests how consistently a measuring instrument measures whatever concept it is measuring.
Validity
  • tests how well a measuring instrument measures the particular concept it is intended to measure.
Sensitivity
  • the instrument's ability to accurately measure variability in responses.
Validity
  • "How can we be reasonably sure that we are measuring the concept we set out to measure?" Sekaran (1992, pp. 171-173)
  • content validity (face validity): depending on the source reference these forms of validity may be treated as similar or the same
  • face validity is a judgement by professionals (experts) that the measures really capture the concept
  • content validity asks: is the full content of a definition represented in the measure? Does the measure capture the entire meaning?
  • criterion-related validity: this refers to the ability of some measures to correlate with other measures of the same construct
  • concurrent validity: this refers to the degree to which a measure correlates with another measure of the same variable which has already been validated
  • predictive validity: this is defined as the degree to which a measure predicts a second, future measure, that is, future events that are logically related to the construct
  • construct validity: this applies to measures with multiple items. Do the various items operate in a consistent manner?
  • convergent validity means that multiple measures of the same construct hang together or operate in similar ways
  • discriminant validity is the opposite of convergent validity.
Reliability
Reliability refers to the ability of a measure to maintain stability over time or internal consistency. Reliability helps to assess the "goodness" of a measure. It is usually assessed after determining the validity of a measure, because a measure can be reliable but totally lack validity; that is why validity is assessed first.
-           stability
  • test-retest reliability is determined by examining the reliability coefficient obtained by repeating an identical measure on a second occasion.
  • parallel-form reliability is apparent when two forms have similar items and the same response format, with only the wordings and the ordering of questions changed. Here the responses on two comparable sets of measures tapping the same construct should be highly correlated.
-           internal consistency
  • inter-item consistency reliability is a test of the consistency of respondents' responses to all the items in a measure.
  • split-half reliability reflects the correlation between the two halves of an instrument.
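As a sketch of how split-half reliability could be computed, the following Python example correlates the totals of the odd and even item halves and applies the Spearman-Brown correction. The 4-item scale and the respondents' scores are invented for illustration:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one row per respondent, one column per item.
    Splits the items into odd/even halves, correlates the half totals,
    then applies the Spearman-Brown correction for full test length."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r = pearson(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown step-up formula

# Hypothetical 4-item attitude scale answered by 5 respondents
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
]
print(round(split_half_reliability(scores), 3))  # -> 0.95 (high internal consistency)
```

A coefficient close to 1 suggests the two halves of the instrument are tapping the same construct; values much lower would prompt a review of the items.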
Developing a measure
In developing a reliable measure researchers should:
  • clearly conceptualise all constructs/concepts
  • use a precise level of measurement: a higher level of measurement is likely to be more reliable
  • use multiple indicators: use more than one item to measure the variable
  • use pretests, pilot tests and replication.
Sensitivity
Sensitivity is particularly important when measuring attitudes. It refers to an instrument's ability to accurately measure variability in responses. By allowing for a greater range of possible scores, the sensitivity of a scale can be increased. Questions that have low sensitivity, for example "agree or disagree", could be improved by using a five-point (Likert) scale.
Attitude measurement
There are a number of scaling techniques that are commonly used in business research to measure attitudes, either by:
  • Ranking
  • Rating
  • Sorting
Scaling is a procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question.
Rating scales are widely used in research. They might be used to measure such factors as perceptions of products, people, policies or organisations. People can rate themselves or others, or rate printed materials or physical devices.
  • simple attitude scale
  • category scale
  • Likert scale
  • semantic differential
  • numerical scale
  • constant-sum scale
  • Stapel scale
  • graphic rating scale
  • Thurstone equal appearing interval scale.
Rating scale errors
  • error of severity: the tendency to rate objects too low
  • error of leniency: the tendency to rate objects too high
  • error of central tendency: the tendency to rate objects in the middle
  • error of proximity: the tendency to rate objects similarly because they are near each other
  • halo effect: the tendency to rate an object according to how the rater feels about it in general.
Ranking scales are used to rank order objects or factors by having respondents place them in order according to preselected criteria. The most common type is paired comparison.
  • Multidimensional Scaling (MDS)
  • Conjoint analysis
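The paired-comparison approach can be sketched in code: each object is compared against every other object, and the objects are ranked by their number of "wins". The brands, the preference values and the `choose` function below are all hypothetical:

```python
from itertools import combinations
from collections import Counter

def rank_from_paired_comparisons(objects, choose):
    """Derive a rank order from paired comparisons.
    `choose(a, b)` returns whichever object the respondent prefers;
    objects are then ranked by their total number of wins."""
    wins = Counter({obj: 0 for obj in objects})
    for a, b in combinations(objects, 2):
        wins[choose(a, b)] += 1
    return [obj for obj, _ in wins.most_common()]

# Hypothetical respondent with fixed underlying preferences
preference = {"Brand A": 3, "Brand B": 1, "Brand C": 2}  # higher = more preferred
choose = lambda a, b: a if preference[a] > preference[b] else b

print(rank_from_paired_comparisons(list(preference), choose))
# -> ['Brand A', 'Brand C', 'Brand B']
```

With n objects the respondent faces n(n-1)/2 comparisons, which is why paired comparison is usually reserved for short lists of items.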
6.3       Questionnaire Design
In this section the general principles of questionnaire design will be introduced. These principles apply to all types of questionnaires, whether administered by mail, in person, by telephone, via the internet or to international respondents. The questionnaire design process follows five steps as summarised in figure 6.1. These steps have been synthesised from leading authors in research design as well as from practical experience.
Questionnaire design process
  • Step 1:             Determine the required information and from whom it should be sought
  • Step 2:             Determine the interview method and length of questionnaire
  • Step 3:             Prepare the draft questionnaire: Question content, question wording, response format and structure and layout.
  • Step 4:             Pretest and revise questionnaire
  • Step 5:             Assess reliability and validity of questionnaire.
Firstly the researcher needs to consider what information will be required and from whom it can best be obtained. Secondly, the most appropriate type of interview method (that is, mail, personally administered, telephone or internet) should be chosen and the desired length of the questionnaire should be considered.
Next, the questionnaire can be drafted giving attention to appropriate question wording and content, suitable response format for interviewees, and functional yet attractive structure and layout of the instrument.
Then the questionnaire can be pretested on a small sample of people to check that it works as intended. Necessary revisions can be made at this stage. Finally, the researcher should apply strategies that will assess the reliability and validity of the questionnaire.
These five general principles of questionnaire design will now be explained in greater detail.
Step 1: Determine the required information and from whom it should be sought 
To be able to identify the information you need, refer to your research questions and hypotheses. You will need to obtain data that will enable the hypotheses to be tested. The data will also need to be collected in a form that is suitable for the proposed data analysis method. Consider who will be able to supply the information. It will need to be someone who has the required knowledge, access to the information and authority to provide it.
Step 2:            Determine the interview method and length of questionnaire
The choice of mail, personally administered, telephone or internet interviews will impact on the questionnaire design. Mail questionnaires are self-administered by the respondent, so clear instructions must be given and the questions must be simple. With personally administered (or face-to-face) interviews, more complex questions and explanations can be used because of the greater interaction possible between the interviewer and respondent. Telephone interviews need to be kept short and simple as respondents may find them intrusive. Questionnaires delivered over the internet should have the simplicity of mail questionnaires as well as simple procedures for responding, such as clicking buttons for choices.
The length of the questionnaire will depend on the amount of data required, the cost of the survey and the likely response rate. Lengthy questionnaires may result in reduced response.
Step 3: Prepare the draft questionnaire 
Four features should be considered in turn when preparing a draft questionnaire: question content, question wording, the desired format for responses and the structure and layout of the questionnaire. Each of these features will be briefly addressed.
Question content
The question content will be determined by the information required in step 1, that is, by reviewing the research objectives and seeing what needs to be addressed. You will need to consider whether the respondent will be able to provide a response; that is, whether the respondent has access to the necessary information or will want to provide it.
Question wording 
The choice of wording is critical in questionnaire design. To maximise the rate of response to questions, design the questions so that they are easy to answer. Participants are also more likely to respond if they feel questions are appropriate, relevant and neutral.
Response format
The main consideration for the choice of response format is the data analysis method, which may specify a particular type of measurement. In addition, you may wish to make comparisons with your survey results against previous research. If so, it is recommended that you use similar response categories so that meaningful comparisons may be made. Note also that the Australian Bureau of Statistics (ABS) produces many research publications that you may refer to for examples of gathering data on demographics such as gender or marital status. Also consider the respondent's effort in answering questions: ticking a list of choices may be easier than providing an open-ended answer. 
Example of question wording
Consider the following examples of questions in a survey investigating the extent of personal use of the internet:
  • An appropriate question
  • Do you have access to the internet from your home?
  • This question is appropriate because the respondent can see it is linked to the study purpose.
  • An inappropriate question
  • What is your marital status?
  • (Please tick one box.)
  • Single
  • Married
  • Separated/divorced
  • Other
This question is inappropriate because the respondent will have difficulty finding a connection with the study purpose. The researcher may need to preface the question with an explanation. 
  • A relevant question
  • How often do you access the internet each week?
  • This question is relevant to the study purpose. 
An irrelevant question
  • What is your weekly expenditure on groceries?
  • This question is not connected to the study purpose and hence appears irrelevant. 
  • A neutral question
  • How many hours each week do you spend "surfing the Net"?
A loaded question
  • How many hours each week do you waste "surfing the Net"?
  • The second question is judgmental as it implies that a person is being unproductive when using the internet.
  • There are three main types of response format that can be used: open-ended (unstructured), close-ended (structured), and scaled-response.
Open-ended questions are suitable where precise information is required, but to list all possible answers would be difficult or lengthy. They can also be used to encourage respondents to express themselves freely, such as in an exploratory survey.
Close-ended questions can be categorised as single, where one response is required, dichotomous, where two response items are provided, or multichotomous, where a number of alternatives are listed.
Examples of close-ended questions
  • Single close-ended question:
  • What is your current age?.............................Years
  • Dichotomous close-ended question:
  • Did you complete your Senior Certificate?
  • (Please tick one box)
  • Yes
  • No
Multichotomous close-ended question: Which of the following services provided by the Student Association have you accessed this year? (More than one box may be ticked.)
  • Medical Centre
  • Counselling service
  • Careers guidance
  • Sports club
  • Other (please describe)
Scaled-response questions require the use of a scale to measure the attributes of the construct.
Below is a list of statements about pre-enrolment courses. Please indicate whether you agree or disagree with the statement, by circling the number that best represents your answer.
I found the "How to study" course helpful in preparation for University studies.
Strongly agree             Agree              Undecided                   Disagree          Strongly disagree
1                                  2                      3                                  4                      5
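A scaled-response item such as this could be summarised as follows. The response codes below are invented for illustration (1 = Strongly agree through 5 = Strongly disagree, matching the scale above):

```python
from collections import Counter
from statistics import mean, median

# Hypothetical circled answers for the "How to study" item
responses = [1, 2, 2, 3, 1, 2, 4, 2, 1, 5]

labels = {1: "Strongly agree", 2: "Agree", 3: "Undecided",
          4: "Disagree", 5: "Strongly disagree"}

# Frequency distribution: always valid for ordinal data
for code, count in sorted(Counter(responses).items()):
    print(f"{labels[code]:<18} {count}")

# The median is safe for ordinal data; the mean is only defensible
# if the researcher treats the Likert scale as interval data
print("median:", median(responses))  # -> median: 2
print("mean:", mean(responses))      # -> mean: 2.3
```

Note that because low codes mean agreement here, a low mean or median indicates a favourable attitude; reversing the coding direction is a common alternative convention.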

A summary of the benefits and limitations of response format choices is given in the following Table.
Benefits and limitations of response format choices
Type of response format: Open-ended
Benefits:
  • Respondents can express themselves freely
  • Avoids listing all possible answers
  • Useful in developing response items for close-ended questions
  • Caters for respondents who like to answer in their own words
Limitations:
  • Respondents may speak at length
  • Respondents may write too briefly
  • Problems with interpreting handwriting
  • Need for postcoding of answers
  • Can be demanding on the respondent
  • More time consuming to complete
  • More difficult to analyse

Type of response format: Close-ended
Benefits:
  • Easier to use by both respondent and researcher
  • Respondents can recognize a response rather than remember it
  • Data can be gathered ready for analysis
  • Responses can be pre-coded
  • Answers are less variable and can be meaningfully compared
  • Higher response rate and less missing data
Limitations:
  • Choices may "lead" the respondent
  • Must ensure all possible responses are mutually exclusive and exhaustive
  • Possible response bias if the respondent does not read the question carefully

Type of response format: Scaled-response
Benefits:
  • Useful where information is difficult to quantify
  • Useful for sensitive topics
  • Easy to use
  • Items can be reworded to check reliability
Limitations:
  • Possible response bias if the respondent does not read the question carefully.
Structure and layout
The order of questions can affect the motivation of respondents to complete the questionnaire. Sometimes a screening question, verifying the respondent's eligibility to complete the questionnaire, appears first. Following the screening questions, the opening questions should be simple, interesting and non-sensitive in order to gain respondent cooperation.
Step 4: Pretest and revise questionnaire
Pretesting is an important stage to ensure that potential problems are identified and eliminated. Respondents in the pretest can tell you the amount of time needed to complete the questionnaire. It is recommended that three groups of people be used to scrutinise the questionnaire: colleagues/fellow researchers; potential users of the data; and a sample of the potential respondents. There is no rule of thumb, but around 20 to 30 people could be included in total. A pretest sample of this size will allow you to try out the data analysis technique as well as check the properties of the data collected.
Colleagues are chosen because they understand the study's purpose and they have similar training to the researcher. Their function is to determine whether the questionnaire will be able to accomplish the survey objectives.
Potential users of the data are people with a high degree of knowledge about the topic of interest. Their function is to check the accuracy and completeness of the question content.
The sample of potential respondents is used to test that the questionnaire functions properly. The sample should reflect the diversity of the population of interest. Check to see that respondents manage to answer the questions as intended. Following pretesting, alterations should be made to the questionnaire in line with the feedback obtained. Don't just pretest once. Keep pretesting until you are satisfied that no more changes are required to improve the questionnaire.
The final step in the questionnaire design process is to assess reliability and validity.
Step 5: Assess reliability and validity of questionnaire
An important issue in questionnaire design is whether the instrument accurately and consistently measures what it is supposed to measure. In other words, the questionnaire should be valid and reliable.
A questionnaire is valid if it measures what it is supposed to measure and it is reliable if the responses are consistent and stable. Internal validity is concerned with the degree of confidence the researcher has in the causal effects between variables. Externalvalidity is concerned with the ability to generalise the findings ofthe research from a specific setting and sample to a much broader range of populations and settings. The issues of reliability and validity are covered in greater detail in other sections of this unit but please be aware that they are critical in the questionnaire design process.
Weakness of surveys
The non-response problem
How do we deal with the (frequently) large number of persons who do not return the questionnaire, or respondents who answer only some of the questions asked? This is especially important if the respondents are not representative of the population (or those who respond are part of a special interest group). Perhaps the problem could be partly alleviated by taking a stratified random sample in the first place.
In some cases it is possible to "salvage" the survey results by weighting the responses of the under-represented group among the responses. However this should only be done if there is a fairly good estimate of the total proportion of the under-represented group in the population, and even then with great caution. Where it is possible to identify the non-respondents, it may be appropriate to conduct a further survey (follow up). 
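Such weighting can be sketched as follows. The group shares and scores below are invented; in practice the population proportions would come from a reliable external source such as census data:

```python
# Hypothetical example: females are 50% of the population but only
# 25% of respondents, so female responses are weighted up.
population_share = {"female": 0.50, "male": 0.50}   # assumed known from census data
respondents = [("female", 4), ("male", 2), ("male", 3), ("male", 3)]  # (group, score)

# Share of each group actually achieved in the sample
sample_share = {g: sum(1 for grp, _ in respondents if grp == g) / len(respondents)
                for g in population_share}

# Weight = population share / sample share, per group
weights = {g: population_share[g] / sample_share[g] for g in population_share}

weighted_mean = (sum(weights[g] * score for g, score in respondents)
                 / sum(weights[g] for g, _ in respondents))
unweighted_mean = sum(s for _, s in respondents) / len(respondents)

print(round(unweighted_mean, 2), round(weighted_mean, 2))  # -> 3.0 3.33
```

The weighted estimate moves toward the under-represented group's responses; as the module cautions, this is only defensible when the population proportions are well established.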
Errors in survey research
  • random sampling error
Random sampling error arises when there is a difference in the results of the sample and the results of the census using identical procedures. Chance variations are possible even with the use of proper random probability sampling.
  • systematic error (non-sampling error)
Results from an imperfect research design or a mistake in the execution of the research. Error occurs when the results deviate in one direction or the other from the true value of the population parameter.
  • two types of systematic error
  • respondent error
Respondents do not cooperate or do not give truthful answers.
  • non-response error (as discussed earlier)
Those who did not respond are not representative of those who did, particularly in the area of telephone and personal surveys in the form of not-at-homes or refusals. Self-selection bias can occur if the questionnaire is self-administered.
  • response bias
Respondents intentionally or inadvertently falsify their responses, or bias arises through an unconscious misinterpretation of the respondent's answer.
  • strategies to reduce response bias include:
  • second mail out
  • call back or schedule another convenient time
  • chi-square test for goodness of fit.
  • Administrative error
  • improper administration or execution of the research task
  • types of administrative error are classified as:
  • data processing error
  • sample selection error
  • interview error
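The chi-square goodness-of-fit test mentioned above can be used to check whether the achieved sample matches a known population profile, as a signal of non-response bias. Below is a minimal Python sketch with invented respondent counts; the critical value is the standard tabulated value for two degrees of freedom at the 5% level:

```python
# Hypothetical check for non-response bias: compare the age profile of
# the achieved sample against known population proportions.
observed = {"18-34": 30, "35-54": 50, "55+": 20}              # respondent counts
expected_share = {"18-34": 0.40, "35-54": 0.40, "55+": 0.20}  # e.g. from census data

n = sum(observed.values())

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi_sq = sum((observed[g] - expected_share[g] * n) ** 2 / (expected_share[g] * n)
             for g in observed)

# Tabulated critical value for df = 2 at the 5% significance level
CRITICAL_5PCT_DF2 = 5.991
print(round(chi_sq, 3), chi_sq > CRITICAL_5PCT_DF2)  # -> 5.0 False
```

Here the statistic (5.0) falls below the critical value, so with these invented numbers there is no significant evidence that the sample's age profile differs from the population's; a larger statistic would suggest weighting or follow-up of non-respondents.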
Conclusion
This module considered the major descriptive research technique of surveys. It was highlighted that while surveys can be used in all research designs, they are the major technique used in descriptive designs. The module began by considering measurement, in particular the four basic scales of measurement and the criteria for good measurement. In addition the concepts of reliability, validity and sensitivity were introduced.
The major communication approaches of personal, telephone, mail and internet were outlined and compared and key factors in determining the most appropriate communication method for a survey were discussed. Next, consideration was given to some general guidelines for increasing response rates.
