Tag Archives: Qualitative research

Topic 0020: Assessment vs Evaluation

What is the difference between “assessment” and “evaluation”?

Assessment is the process of objectively understanding the state or condition of a thing, by observation and measurement. Assessment of teaching means taking a measure of its effectiveness. “Formative” assessment is measurement for the purpose of improving the thing being assessed; “summative” assessment is what we normally call “evaluation.”

Evaluation is the process of observing and measuring a thing for the purpose of judging it and of determining its “value,” either by comparison to similar things, or to a standard. Evaluation of teaching means passing judgment on it as part of an administrative process.

Ideally, a fair and comprehensive plan to evaluate teaching would incorporate many data points drawn from a broad array of teaching dimensions. Such a plan would include not only student surveys, but also self-assessments, documentation of instructional planning and design, evidence of scholarly activity to improve teaching, and most importantly, evidence of student learning outcomes.

Reference: http://www.itlal.org/?q=node/93

 

 

Topic 0019: Writing Your Literature Review

What is the literature review?

  1. A literature review summarises, critically analyses and evaluates previous research available on the subject, presenting this in an organised way. It should address a clearly articulated question or series of questions.
  2. It is NOT:
    • a descriptive list of, or set of summaries of, books, articles, etc.
    • an exhaustive bibliography of everything ever written on the topic; you need to make a decision about what to include
    • your own arguments and ideas (like an essay)

Why do we write a literature review?

  • Demonstrate an in-depth understanding of your topic area including key concepts, terminology, theories and definitions
  • Identify who the major thinkers are
  • Identify what research has been done in that area
  • Find gaps in the research or current areas of interest to help you formulate your own research question
  • Identify the main research methodologies in your subject area
  • Identify main areas of agreement or controversy
  • Convince the reader that your research questions are significant, important and interesting
  • Convince the reader that your thesis will make an original contribution to the area being investigated.

Steps to complete the literature review

  1. Find relevant literature on your topic and follow trails of references
  2. Identify themes/ideas/theories/approaches to the topic that have emerged from reading
  3. Organise ideas by theme, theory, approach, chronology or any other appropriate structure, but do not just list different authors’ viewpoints
  4. Introduce and explain each theme (or theory/approach), present evidence from your readings (agreements/disagreements), provide critical commentary, and relate it to your own research

Critical Questioning

  1. Who is the author?
  2. What is the author’s central point or main argument?
  3. What findings and conclusions are made?
  4. What evidence is used to support the conclusions?
  5. Is the evidence relevant? What methodology has the author used? What are the strengths and limitations?
  6. Does the author make any assumptions?
  7. What is not being said?
  8. Is there any explicit or hidden bias?
  9. How is the text relevant to YOUR project or assignment?
  10. How does this link with other texts that you have read?

(For synthesising information, refer to Topic 0007: Matrix Method for Literature Review – Approaches to Identify Research Gaps and Generate RQ.)

Figure 1: Structuring the Literature Review

Topics (broad to narrow)

Research Title: The Design and Development of E-Portfolio for HIEs in Social Sciences and Humanities

  • 2.1 Chapter Overview
  • 2.2 E-Learning in Malaysia
  • 2.3 E-Portfolio in HIEs
  • 2.4 E-Portfolio Definition and Purpose
  • 2.5 E-Portfolio Reflective Learning Strategies
    • 2.5.1 Critical Thinking
    • 2.5.2 Problem-Solving
    • 2.5.3 Analytical Skills
  • 2.6 Conclusion and Gaps for Further Study

Critical Writing in a Literature Review

  1. Comparing and contrasting different theories, concepts, etc., and indicating the position you are taking for your own work
  2. Showing how limitations in others’ work create a research gap for you
  3. Strategic and selective referencing to support the underpinning arguments which form the basis of your research
  4. Synthesising and reformulating arguments from various sources to create a new or more developed point of view
  5. Agreeing with, or defending, a point of view or finding
  6. Accepting that current viewpoints have some strengths, but qualifying your position by highlighting weaknesses
  7. Rejecting a point of view with reasons (e.g. lack of evidence)
  8. Making connections between sources

Adapted from Ridley, D. (2008). The Literature Review: A Step-by-Step Guide for Students. London: Sage.


Topic 0018: Design and Development Methods and Strategies

Design and Development Methods & Strategies

Design and development research uses a wide variety of methodologies. Evaluation research techniques (both quantitative and qualitative) are also included in many studies. The following list presents the methods commonly used in design and development research, organised by type of research project and project emphasis:

Common Methods Employed in Design and Development Research

Product & Tool Research

  • Comprehensive design and development projects: case study, content analysis, evaluation, field observation, in-depth interview
  • Phases of design and development: case study, content analysis, expert review, field observation, in-depth interview, survey
  • Tool development and use: evaluation, expert review, in-depth interview, survey

Model Research

  • Model development: case study, Delphi, in-depth interview, literature review, survey, think-aloud methods
  • Model validation: experimental, expert review, in-depth interview
  • Model use: case study, content analysis, field observation, in-depth interview, survey, think-aloud methods

Is DDR Mixed Methods Research?

The term ‘mixed methods research’ has been used to describe studies that combine qualitative and quantitative methods. It is a way of using multiple approaches to answer a given research question.

Ross and Morrison (2004) support this trend when they take the position that qualitative and quantitative approaches are more useful when used together than when either is used alone and that, when combined, they are likely to yield a richer and more valid understanding.

Johnson and Onwuegbuzie (2004) describe the areas of agreement between advocates of the two positions:

  • While individual perceptions are affected by one’s background and experience, there are not different realities, but simply varying perceptions.
  • It is possible for one data set to be compatible with different theories; i.e., there are alternative explanations of phenomena.
  • Empirical research does not always provide final proof.
  • Researchers are bound by their values, attitudes and beliefs.

These fundamental agreements make mixed methods research not only practical in many situations but also logically sound. However, many design and development studies that employ multiple research methods are not actually mixing qualitative and quantitative orientations; they are simply making use of a variety of similar strategies.

Topic 0017: Internet-Mediated Research (IMR)

State of the art

It is at the data-gathering stage that IMR methods have the biggest impact, as opposed to other stages and processes in the methodology of a piece of research, such as conceptualisation, interpretation, etc. (e.g. Hewson, 2008).

IMR approaches which seemed viable at this time included interviews, focus groups and observational studies using linguistic data, and procedures for implementing these methods were devised and piloted (e.g. interviews: Chen & Hinton, 1999; Murray & Sixsmith, 1998; focus groups: Gaiser, 1997; Tse, 1999; Ward, 1999; linguistic observation: Bordia, 1996; Workman, 1992; Ward, 1999). These approaches are most relevant to qualitative social science research and offer:

  • cost and time efficiency,
  • expanded geographical reach, and
  • access to hard-to-reach populations.

Tools, technologies, procedures

Synchronous approaches gather data in real time, allowing for:

  • debate, clarification and resolution
  • more rigorous procedures
  • obtaining signed consent forms

One option with asynchronous approaches is to follow them in ‘real time’, e.g. to subscribe to a group and follow an ongoing discussion as it unfolds.

Design issues and strategies

  • Solely text-based methods
  • Lack of physical proximity, and therefore the loss of:
    • facial expressions,
    • tone of voice,
    • body language, etc.

Depth and reflexivity

  • offering potentially more time for thoughtful, reflective responses
  • elicitation of richer, more elaborate qualitative data
  • enhanced reflexivity and improved depth and accuracy of interviewees’ responses (Bowker & Tuffin, 2004; Kenny, 2005; McDermott & Roen, 2012; Murray & Sixsmith, 1998; O’Connor & Madge, 2002)

Levels of rapport

‘Getting to know’ participants may help in maximising levels of confidence in the authenticity of the accounts they offer during the course of an interview.

Anonymity, disclosure, social desirability

  • high degree of privacy (Hewson et al., 2003)
  • higher levels of self-disclosure than in face-to-face (ftf) contexts (e.g. Bargh, McKenna & Fitzsimons, 2002; Joinson, 2001)

Principles for good practice in qualitative internet-mediated interview and focus group research.

  1. Use robust, well-tested procedures which have been well-piloted (reliability).
  2. Use the simplest low-tech solutions and equipment available that will do the job (reliability, accessibility).
  3. Use appropriate procedures for verifying identity (e.g. offline, audio / video) where this is crucial (e.g. highly sensitive research) (ethics, validity).
  4. Adopting clear strategies for establishing rapport has been shown to work well and is advisable (validity).
  5. Remain mindful of potential trade-offs when deciding upon procedures and making design choices, e.g. asynchronous approaches may facilitate depth and reflexivity but reduce conversational ‘flow’ (validity).
  6. Related to the above principle, remain aware of possibilities for combining methods, e.g. online and offline, asynchronous and synchronous, etc. (validity).
  7. Carefully consider security and confidentiality risks when making sampling and procedural design choices, and the ethical responsibility to inform potential participants of any non-trivial risks they may be unaware of (ethics, validity).
  8. Adopt procedures for maximising security and confidentiality where possible (e.g. setting up a dedicated online research site) (ethics, validity).
  9. Remain mindful of the possible threats to participant confidentiality and anonymity that can emerge from dissemination and publication procedures, and take careful measures to minimise these threats (ethics).
  10. Respect standards of privacy and netiquette, and pass participation requests through moderators where appropriate (e.g. sampling from mailing lists or newsgroups) (ethics).
  11. Make sure participation requests are well-constructed, containing information on researcher affiliations, contact details for further information, and value of the research (ethics, validity).
  12. Carefully consider how different sampling approaches and design procedures may facilitate or restrict access by different groups (accessibility, ethics).

Principles for good practice in qualitative internet-mediated observation and document analysis research.

  1. Keep in mind that different observation sites / sources may restrict or facilitate the design options available (e.g. using archived logs precludes participant approaches and makes disclosure / consent often implausible; observing real-time chat makes undisclosed, nonparticipant observation often untenable) (ethics, validity).
  2. Keep in mind the different types of dialogue and interaction that may be encouraged by synchronous (e.g. playful) and asynchronous (e.g. reflective) technologies when selecting which is most appropriate (validity).
  3. Carefully consider whether undisclosed approaches are ethically justifiable, keeping in mind the following key factors: privacy expectations; sensitivity of data; levels of risk of harm; legal and copyright regulations; scientific value (ethics).
  4. Keep in mind that trade-offs will often emerge, especially in relation to ethics procedures, e.g. disclosure may increase the risk of reducing data authenticity and validity, but also reduce the risk of harming a group (ethics, validity).
  5. Keep in mind that it is often good practice to consult moderators before carrying out observation (e.g. of online discussion groups), particularly where disclosed approaches are proposed; however, moderators may also have agendas and opinions which could be prohibitive to the research, and in some cases not making contact is arguably justified (ethics, validity).
  6. Remain mindful of the increased traceability of data sources online, and the potential associated threats to anonymity / confidentiality, particularly in devising dissemination / publication strategies (ethics).
  7. Take steps to maximise data security, especially when utilising less secure technologies such as email (e.g. in soliciting documents) (ethics).

Topic 0016: Differentiating between Quantitative and Qualitative Research

The differences between quantitative and qualitative research are summarised below, aspect by aspect.

Definition

  • Quantitative: concerned with discovering facts about social phenomena.
  • Qualitative: concerned with understanding human behaviour from the informants’ perspective.

Focus

  • Quantitative: assumes a fixed and measurable reality.
  • Qualitative: assumes a dynamic and negotiated reality.

Purpose

  • Quantitative: to test theories, construct facts, differentiate, correlate, predict human behaviour, and explain statistically.
  • Qualitative: to develop or complete theories, enhance knowledge, explain facts, and explain phenomena.

Research Methods

  • Quantitative: experimental, quasi-experimental, structured interview, structured observation, survey, correlational.
  • Qualitative: observation, unstructured interview, document analysis, case study, ethnography, narrative, grounded theory.
Strategy

  • Quantitative: formal, structured interviews; surveys; questionnaires.
  • Qualitative: informal, unstructured interviews; observation; documentation; audio-visual materials.

Data Analysis

  • Quantitative: raw data as numbers; statistical analysis; deductive; software such as SPSS, SEM-AMOS, SmartPLS, etc.
  • Qualitative: raw data as verbatim text; continuous analysis; non-statistical; inductive; software such as ATLAS.ti, NVivo, etc.

Data Interpretation

  • Quantitative: generalisation.
  • Qualitative: summarisation.

Common Problems

  • Quantitative: variables; validity.
  • Qualitative: time-consuming; reliability; no standardised procedure; findings not generalisable to the population.

Terminology

  • Quantitative: explain, variable, hypothesis, conceptual, generalisation.
  • Qualitative: explore, element, category, theme, pattern, model, framework.

Sampling Method

  • Quantitative: probability sampling (Creswell, 2005; Krejcie & Morgan, 1970).
  • Qualitative: non-probability sampling (Miles & Huberman, 1994; Creswell, 2005).

Refer to Topic 0013: Choosing a Sampling Method.

Topic 0015: Needs Assessment

Definition of Key Terms

“Need” refers to the gap or discrepancy between a present state (what is) and a desired state (what should be). The need is neither the present nor the future state; it is the gap between them.

“Target Group”: needs assessments are focused on particular target groups in a system. Common target groups in education settings include students, parents, teachers, administrators, and the community at large. Ideally, needs assessments are initially conducted to determine the needs of the people (i.e., the service receivers) for whom the system exists. However, a “comprehensive” needs assessment often takes into account needs identified in other parts of a system. For example, a needs assessment might include the concerns of the “service providers” (e.g. teachers, guidance counselors, or school principals: the people who have a direct relationship with the service receivers) or “system issues” (e.g., availability of programs, services, and personnel; level of program coordination; and access to appropriate facilities).

A “Needs Assessment” is a systematic approach that progresses through a defined series of phases. A needs assessment:

  • Focuses on the ends (i.e., outcomes) to be attained, rather than the means (i.e., process). For example, reading achievement is an outcome, whereas reading instruction is a means toward that end.
  • Gathers data by means of established procedures and methods designed for specific purposes; the kinds and scope of methods are selected to fit the purposes and context of the needs assessment.
  • Sets priorities and determines criteria for solutions so that planners and managers can make sound decisions.
  • Sets criteria for determining how best to allocate available money, people, facilities, and other resources.
  • Leads to action that will improve programs, services, organizational structure and operations, or a combination of these elements.

Steps

Let’s take a quick look at general steps taken in a needs assessment.

  1. Exploration and identification. During the first phase of the needs assessment, you need to determine what you already know about your organization’s needs, whether it be additional resources, new technologies, or market expansion. It’s about figuring out where you are and where you want to be. You also need to discover other undisclosed needs that may be hindering you from moving from where you are to where you want to be. You will often rank these needs in order of importance. You will then set the scope of your research. In other words, the needs you are going to focus upon.
  2. Data gathering and analysis. At this stage you are collecting the information you need to better understand the gaps (needs) between where you are and where you want to be. Data may be collected from internal or external records through research techniques such as surveys and document analysis. After the data is collected, it is organized and analyzed.
  3. Utilization. This is where the data you analyzed is used to create a plan of action and implement it. You will set priorities, evaluate solutions, apply a cost-benefit analysis to determine which solution is best in light of the relative costs and benefits of each, formulate a plan to implement your solution, and then allocate the resources necessary for implementation. Again, the goal is to develop a plan to close the gaps between the desired future state and its current state.
  4. Evaluation. You will evaluate the results of the action plan against the desired outcomes: has the action plan brought you closer to where you want to be? Evaluation can help you determine what made an action plan successful or find the errors in your needs assessment. For example, did you miss an important gap, or were the resources you allocated insufficient to close the gap?

What Does Watkins Say?

Watkins describes analysis as “a process for breaking something down to understand its component parts and their relationships” (R. Watkins, personal communication, June 13, 2016). He concludes that “a needs assessment identifies gaps between current and desired results and places those [gaps] in priority order on the basis of the costs to ignore the needs.” Watkins and his colleagues also stress that the gap itself is the need.
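To make this concrete, here is a minimal Python sketch with invented figures (not taken from any of the sources above): each need is expressed as the gap between current and desired results, and the gaps are then ordered by the estimated cost of ignoring them, as the quotation suggests.

```python
# Minimal sketch (invented data): express needs as gaps between current and
# desired results, then order them by the estimated cost of ignoring them.
needs = [
    # (area, current result, desired result, estimated cost to ignore)
    ("reading proficiency (%)", 62, 85, 120_000),
    ("graduation rate (%)", 78, 90, 200_000),
    ("teacher ICT skills (%)", 40, 75, 60_000),
]

# The need is the gap itself: the desired state minus the current state.
gaps = [
    {"area": area, "gap": desired - current, "cost_to_ignore": cost}
    for area, current, desired, cost in needs
]

# Prioritise by the cost of ignoring the need (highest cost first).
for need in sorted(gaps, key=lambda g: g["cost_to_ignore"], reverse=True):
    print(f"{need['area']}: gap = {need['gap']}, cost to ignore = {need['cost_to_ignore']}")
```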

Figure 1: Needs Assessment vs. Needs Analysis Concept Map V2.0

 


Sources (adapted from):

  1. http://study.com/academy/lesson/what-is-needs-assessment-definition-examples-quiz.html
  2. https://www2.ed.gov/admins/lead/account/compneedsassessment.pdf
  3. http://www.ispi.org/ISPI/Resources/PX/Articles/Editors__Pick/Needs_Assessment_vs_Needs_Analysis__What_s_the_Diff_.aspx

Topic 0013: Choosing a Sampling Method

There are many methods of sampling when doing research, such as the following:

Probability methods: this is the best overall group of methods to use, as you can subsequently use the most powerful statistical analyses on the results (a short code sketch at the end of this topic illustrates the first two methods).

  • Simple random: the whole population is available.
  • Stratified: there are specific sub-groups to investigate (e.g. demographic groupings).
  • Systematic: a stream of representative people is available (e.g. in the street).
  • Cluster: population groups are separated and access to all is difficult, e.g. in many distant cities.

Quota methods: for a particular analysis and valid results, you can determine the number of people you need to sample. In particular, when you are studying a number of groups and sub-groups are small, you will need equivalent numbers to enable equivalent analysis and conclusions.

  • Quota: you have access to a wide population, including sub-groups.
  • Proportionate quota: you know the population distribution across groups, and normal sampling may not give enough in minority groups.
  • Non-proportionate quota: there is likely to be a wide variation in the studied characteristic within minority groups.

Selective methods: sometimes your study leads you to target particular groups.

  • Purposive: you are studying particular groups.
  • Expert: you want expert opinion.
  • Snowball: you seek similar subjects.
  • Modal instance: the sought ‘typical’ opinion may get lost in a wider study, and you are able to identify the ‘typical’ group.
  • Diversity: you are specifically seeking differences, e.g. to identify sub-groups or potential conflicts.
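As mentioned above, here is a minimal Python sketch of the first two probability methods (simple random and stratified). The sampling frame, group labels and sample size are invented for illustration, and only the standard library is used.

```python
# Minimal sketch (invented frame): simple random vs. stratified sampling.
import random
from collections import defaultdict

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: (person_id, demographic group)
frame = [(i, "undergraduate" if i % 3 else "postgraduate") for i in range(1, 301)]

# Simple random sampling: every unit has an equal chance of selection.
simple_sample = random.sample(frame, k=30)

# Stratified sampling: sample proportionally within each sub-group (stratum).
strata = defaultdict(list)
for unit in frame:
    strata[unit[1]].append(unit)

stratified_sample = []
for group, units in strata.items():
    n_group = round(30 * len(units) / len(frame))  # proportional allocation
    stratified_sample.extend(random.sample(units, k=n_group))

print(len(simple_sample), len(stratified_sample))
```

Stratified sampling guarantees that each sub-group appears in proportion to its share of the frame, whereas simple random sampling only achieves that on average.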

Topic 0012: Sampling Terminology

Population

A population is the total group of people whom you are researching and about which you want to draw conclusions.

It is common for characteristics of the population to be denoted by Greek letters and for those of the sample to be shown by Latin letters. For example, the standard deviation of the population is often shown as σ (sigma), whilst that of a sample is ‘s’. Sometimes, as an alternative, capital letters are used for the population.

Sample frame

The list of people from whom you draw your sample, such as a phone book or ‘people shopping in town today’, may well be less than the entire population and is called a sample frame. This must be representative of the population otherwise bias will be introduced.

Sample

 

When the population is large or generally inaccessible, the approach used is to measure a subset, or sample.

Unit

A unit is the thing being studied. Usually in social research this is people. There may also be additional selection criteria used to choose the units to study, such as ‘people who have been police officers for at least five years’.

Sample size

In order to be representative of the population, the sample must be large enough. There are calculations to help you determine this. The required sample size depends on the homogeneity of the population, as well as its total size.
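The original text does not give a specific formula, but as one hedged illustration the sketch below uses Cochran’s formula for estimating a proportion, with a finite-population correction; for a population of 1,000 it gives 278, which matches the widely used Krejcie and Morgan (1970) table cited elsewhere in this archive.

```python
# Minimal sketch: one common sample-size calculation (Cochran's formula for a
# proportion, with a finite-population correction). Values are illustrative.
import math

def required_sample_size(population_size, confidence_z=1.96, margin_of_error=0.05, proportion=0.5):
    """Estimate the sample size needed to estimate a proportion.

    proportion=0.5 is the most conservative (largest) assumption.
    """
    n0 = (confidence_z ** 2) * proportion * (1 - proportion) / (margin_of_error ** 2)
    # Finite-population correction: smaller populations need fewer respondents.
    n = n0 / (1 + (n0 - 1) / population_size)
    return math.ceil(n)

print(required_sample_size(1000))  # 278 for N=1000 at 95% confidence, +/-5%
```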

Generalizing

After sampling you then generalize in order to make conclusions about the rest of the population.

Validity

Validity is about truth and accuracy. A valid sample is representative of the population and will allow you to generalize to valid conclusions. This aligns with external validity. A valid sample is both big enough and is selected without bias so it is representative of the population.

Bias

Bias, a distortion of results, is the bugbear of all research and it can be introduced by taking a sample that does not truly represent the population and hence is not valid.

Assignment

Having drawn the sample, these units may be assigned to different groups. A common grouping is an experimental group, which receives the treatment under study, and a control group, which gives a standard against which experimental results can be compared. To sustain internal validity, this is usually random assignment. Non-random assignment is sometimes acceptable, for example where two school classes are selected as coherent groups and one is chosen as the control.
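As a minimal sketch (hypothetical participant names), random assignment can be as simple as shuffling the drawn sample and splitting it in half:

```python
# Minimal sketch (hypothetical names): random assignment of a drawn sample
# into an experimental group and a control group.
import random

random.seed(7)  # reproducible illustration

sample = [f"participant_{i:02d}" for i in range(1, 21)]  # invented sample
random.shuffle(sample)  # randomise order to avoid selection bias

midpoint = len(sample) // 2
experimental_group = sample[:midpoint]  # receives the treatment under study
control_group = sample[midpoint:]       # provides the comparison standard

print(len(experimental_group), len(control_group))
```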

Sampling fraction

When a sample of n people is selected from a population of N, the sampling fraction is calculated as n/N. This may be expressed as a number (e.g. 0.10) or a percentage (e.g. 10%).

Sampling distribution

If the sample is described as a histogram (a bar chart showing numbers in different measurement ranges) it will have a particular shape. Multiple samples should have similar shapes, although random variation means each may be slightly different. The larger the sample size, the more similar sample distributions will be.
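A rough way to see this numerically (invented population, standard library only): draw 200 samples at each of several sizes and look at how much the sample means vary from sample to sample. The variation shrinks as the sample size grows, i.e. the samples resemble each other, and the population, more closely.

```python
# Minimal sketch: repeated samples from the same population look more alike
# as the sample size increases (their means vary less between samples).
import random
import statistics

random.seed(1)
population = [random.gauss(100, 15) for _ in range(10_000)]  # invented population

for sample_size in (10, 100, 1000):
    sample_means = [
        statistics.mean(random.sample(population, sample_size))
        for _ in range(200)
    ]
    # With larger samples, the sample means vary less from sample to sample.
    print(sample_size, round(statistics.stdev(sample_means), 2))
```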

Sampling error

This is the standard error for the sample distribution and measures the variation across different samples. It is based on the standard deviation of the sample and indicates how far sample statistics are likely to deviate from the population values. Larger sample sizes lead to a smaller sampling error. An estimate for a single sample is sm = sx / sqrt(N) (a short code sketch follows the list below), where:

  • sx is the standard deviation of the sample
  • N is the sample size
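As referenced above, a minimal sketch of this estimate in Python, with invented scores:

```python
# Minimal sketch: estimating the sampling error (standard error of the mean)
# for a single sample, sm = sx / sqrt(N), using only the standard library.
import math
import statistics

sample = [72, 85, 78, 90, 66, 81, 75, 88, 70, 79]  # invented scores

s_x = statistics.stdev(sample)        # sample standard deviation
n = len(sample)                       # sample size
standard_error = s_x / math.sqrt(n)   # sampling error estimate

print(round(standard_error, 2))
```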

Systematic error

A systematic error is one caused by human error during the design or implementation of the experiment.

Strata

Strata (singular: stratum) are sub-groups within a population or sample frame. These can be random groups, but often are natural groupings, such as men and women or age-range groups. Stratification helps reduce error.

Oversampling

Occurs when you study the same person twice. For example if you selected people by their telephone number and someone had two phone numbers, then you could end up calling them twice. This can cause bias.
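A small sketch (invented names and numbers) of one way to guard against this: de-duplicate the telephone sampling frame by person before drawing the sample, so that nobody can be selected twice.

```python
# Minimal sketch (invented data): de-duplicating a telephone sampling frame so
# that a person with two numbers cannot be selected twice.
frame = [
    ("Aini", "03-1111"),
    ("Badrul", "03-2222"),
    ("Aini", "012-3333"),   # same person, second phone number
    ("Chong", "03-4444"),
]

seen = set()
deduplicated = []
for person, phone in frame:
    if person not in seen:   # keep only one entry per person
        seen.add(person)
        deduplicated.append((person, phone))

print(deduplicated)
```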

Topic 0011: Conceptual Framework vs Theoretical Framework


Understanding of Conceptual Framework

A conceptual framework is placed within a logical and sequential design. It represents a less formal structure and is used for studies in which existing theory is inapplicable or insufficient. It is based on specific concepts and propositions derived from empirical observation and intuition. Theories may be deduced from a conceptual framework.

Understanding of Theoretical Framework

A theory provides a point of focus for attacking the unknown in a specific area. If a relationship is found between two or more variables, a theory should be formulated to explain why the relationship exists. Theories are purposely created and formulated, never discovered; they can be tested but never proven. The function of theory in research is to identify the starting point of the research problem and to establish its direction; it determines and defines the focus and objective of the research problem.

Conceptual Framework

The purposes of a conceptual framework are:

  • To clarify concepts and propose relationships among the concepts in a study
  • To provide a context for interpreting the study findings
  • To explain observations
  • To encourage theory development that is useful to practice

Theoretical Framework

The purposes of a theoretical framework are:

  • To test theories
  • To make research findings meaningful and generalizable
  • To establish orderly and interrelated connections
  • To predict and control
  • To be derived from specific concepts and propositions that are induced or deduced
  • To stimulate research

How to Develop a Conceptual or Theoretical Framework

  1. Select the concepts.
  2. Identify the interrelationships among the concepts.
  3. Formulate definitions; to develop a theoretical framework that can generate and test hypotheses, each concept must be clearly defined.
    • Conceptual definition: conveys the general meaning of the concept.
    • Operational definition: adds another dimension to the conceptual definition by delineating the procedures or operations required to measure the concept.
  4. Formulate the conceptual and theoretical rationale.
    • Through the literature review, the researcher becomes aware of, or confirms, identified theoretical connections between variables.
  5. Let the framework guide data analysis:
    • to observe behaviours, situations, interactions, etc.
    • to answer the research questions.