
Designing SMART Research Objectives

What are the most common weaknesses when formulating a research question? Check your question against these criteria:

  1. Is it linked to theory?
  2. Does it provide fresh insight?
  3. Is the topic clearly stated?
  4. Is it doable in the available time?
  5. Is it realistic in terms of your knowledge and skills?
  6. Do you have access to the data?

Objectives must always be set after having formulated a good research question; after all, they explain how that question is going to be answered. Objectives usually begin with infinitive verbs (refer to Bloom’s Taxonomy / the Revised Bloom’s Taxonomy).

It can be difficult to develop realistic research objectives. Common pitfalls include a scope that is too broad, too little detail, over-simplicity, and over-ambition. Use these S.M.A.R.T. guidelines to develop your objectives:

  1. Specific – avoid general statements, include detail about what you are going to do.
  2. Measurable – there should be a definable outcome.
  3. Achievable – be realistic in what you hope to cover; don’t attempt too much. A less ambitious but completed objective is better than an over-ambitious one that you cannot possibly achieve.
  4. Realistic – think about logistics. Are you practically able to do what you wish to do? Factors to consider include: time; expense; skills; access to sensitive information; participant’s consent; etc.
  5. Time constrained – be aware of the time-frame of the project.
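The five guidelines above can be made concrete as a small self-review aid. The sketch below is purely illustrative: the criteria strings and the `review_objective` helper are our own invention, not part of any SMART standard.

```python
# Illustrative sketch: the S.M.A.R.T. checklist as a reusable review aid.
SMART_CRITERIA = {
    "Specific": "Does the objective avoid general statements and detail what will be done?",
    "Measurable": "Is there a definable outcome?",
    "Achievable": "Is the scope realistic rather than over-ambitious?",
    "Realistic": "Are time, expense, skills, access and consent all practical?",
    "Time constrained": "Does it fit the project time-frame?",
}

def review_objective(objective: str, answers: dict) -> list:
    """Return the criteria an objective still fails, given yes/no answers."""
    return [c for c in SMART_CRITERIA if not answers.get(c, False)]

# A made-up self-assessment of one draft objective:
answers = {"Specific": True, "Measurable": True, "Achievable": False,
           "Realistic": True, "Time constrained": True}
gaps = review_objective("Investigate e-book usage at one library", answers)
print(gaps)  # -> ['Achievable']
```

Working through each criterion explicitly like this makes it harder to let a vague objective slip through.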

Example

Title: An investigation into the student use of e-books at Bolton University.

Aims: Many academic libraries have expanded their library provision by the acquisition of e-books. Despite this strategic direction, the literature reveals that relatively little is known about student perceptions and attitudes towards e-books. Consequently, this research aims to narrow this research gap and conduct empirical research into student perceptions towards e-books and their frequency of use. The results will be used to provide recommendations to library management to improve the quality of service provision regarding e-books.

Research Objectives: The above aim will be accomplished by fulfilling the following research objectives:

  1. Review the literature concerning the student uptake and experience of e-books in academic libraries.
  2. Investigate perceptions and attitudes towards e-books and the usage of e-books at the University of Bolton.
  3. Compare usage statistics between various user-groups, e.g. full-time, part-time, course type, etc.
  4. Identify if any improvements or alterations are required to facilitate a high service quality provision in relation to the e-books service at Bolton University library.

References

  1. Designing Research Aims and Objectives
  2. How to Write SMART Research Objectives
  3. http://www.bolton.ac.uk/bissto/Writing-a-Dissertation/Topic-Aims-Objectives.aspx

Topic 0022: Anderson and Krathwohl – Bloom’s Taxonomy Revised

Understanding the New Version of Bloom’s Taxonomy

A succinct discussion of the revisions to Bloom’s classic cognitive taxonomy by Anderson and Krathwohl and how to use them effectively

Source taken from: http://thesecondprinciple.com/teaching-essentials/beyond-bloom-cognitive-taxonomy-revised/

Background:

Who are Anderson and Krathwohl? These gentlemen are the primary authors of the revisions to what had become known as Bloom’s Taxonomy — an ordering of cognitive skills.  (A taxonomy is really just a word for a form of classification.) This taxonomy had permeated teaching  and instructional planning for almost 50 years before it was revised in 2001. And although these crucial revisions were published in 2001, surprisingly there are still educators who have never heard of Anderson and Krathwohl or their important work in relation to Bloom’s Cognitive Taxonomy. Both of these primary authors were in a perfect position to orchestrate looking at the classic taxonomy critically. They called together a group of educational psychologists and educators to help them with the revisions. Lorin Anderson was once a student of the famed Benjamin Bloom, and David Krathwohl was one of Bloom’s partners as he devised his classic cognitive taxonomy.

Here in the United States, from the late 1950s into the early 1970s, there were attempts to dissect and classify the varied domains of human learning – cognitive (knowing, or head), affective (emotions, feelings, or heart) and psychomotor (doing, or kinesthetic, tactile, haptic or hand/body). The resulting efforts yielded a series of taxonomies for each area. The aforementioned taxonomies deal with the varied aspects of human learning and were arranged hierarchically, proceeding from the simplest functions to those that are more complex. Bloom’s Cognitive Taxonomy had been a staple in teacher training and professional preparation for almost 40 years before Anderson and Krathwohl instituted an updated version. An overview of those changes appears below.

While all of the taxonomies above have been defined and used for many years, a new version of the cognitive taxonomy, known commonly before as Bloom’s Taxonomy, came about at the beginning of the 21st century. You can also search the Web for varied references on the other two taxonomies – affective or psychomotor. There are many valuable discussions on the development of all of the hierarchies, as well as examples of their usefulness and applications in teaching. However, it is important to note that in a number of these discussions, some web authors have mislabeled the affective and psychomotor domains as extensions of Bloom’s work. These authors are in grave error. The original cognitive domain was described and published in 1956. While David Krathwohl was one of the original authors on this taxonomy, the work was named after the senior or first author, Benjamin Bloom. The affective domain was not categorized until 1964, and as David Krathwohl was the lead author on this endeavor, it should bear his name, not Bloom’s. Bloom had nothing to do with the psychomotor domain, which was not described or named until the first part of the 1970s. There are three versions of this taxonomy by three different authors – Harrow (1972), Simpson (1972), and Dave (1970). See full citations below.

The Cognitive Domain:

The following chart includes the two primary existing taxonomies of cognition. Please note in the table below that the one on the left, entitled Bloom’s, is based on the original work of Benjamin Bloom and others as they attempted in 1956 to define the functions of thought, coming to know, or cognition. This taxonomy is almost 60 years old. The taxonomy on the right is the more recent adaptation, the redefined work of Bloom from 2000-01, labeled Anderson and Krathwohl. The group redefining Bloom’s original concepts worked from 1995-2000. As indicated above, this group was assembled by Lorin Anderson and David Krathwohl and included people with expertise in the areas of cognitive psychology, curriculum and instruction, and educational testing, measurement, and assessment. The new adaptation also took into consideration many of Bloom’s own concerns and criticisms of his original taxonomy.

As you will see, the primary differences are not in the listings or rewordings from nouns to verbs, in the renaming of some of the components, or even in the re-positioning of the last two categories. The major differences lie in the more useful and comprehensive additions of how the taxonomy intersects and acts upon different types and levels of knowledge – factual, conceptual, procedural and metacognitive. This melding can be charted to see how one is teaching at both knowledge and cognitive process levels. Please remember the chart goes from simple to more complex and challenging types of thinking.

Taxonomies of the Cognitive Domain

Bloom’s Taxonomy (1956) vs. Anderson and Krathwohl’s Taxonomy (2001)

1. Knowledge (1956): Remembering or retrieving previously learned material. Related verbs: know, identify, relate, list, define, recall, memorize, repeat, record, name, recognize, acquire.

1. Remembering (2001): Recognizing or recalling knowledge from memory. Remembering is when memory is used to produce or retrieve definitions, facts, or lists, or to recite previously learned information.

2. Comprehension (1956): The ability to grasp or construct meaning from material. Related verbs: restate, locate, report, recognize, explain, express, identify, discuss, describe, review, infer, illustrate, interpret, draw, represent, differentiate, conclude.

2. Understanding (2001): Constructing meaning from different types of functions, be they written or graphic messages, or activities like interpreting, exemplifying, classifying, summarizing, inferring, comparing, or explaining.

3. Application (1956): The ability to use learned material, or to implement material in new and concrete situations. Related verbs: apply, relate, develop, translate, use, operate, organize, employ, restructure, interpret, demonstrate, illustrate, practice, calculate, show, exhibit, dramatize.

3. Applying (2001): Carrying out or using a procedure through executing or implementing. Applying relates to situations where learned material is used through products like models, presentations, interviews or simulations.

4. Analysis (1956): The ability to break down or distinguish the parts of material into its components so that its organizational structure may be better understood. Related verbs: analyze, compare, probe, inquire, examine, contrast, categorize, differentiate, investigate, detect, survey, classify, deduce, experiment, scrutinize, discover, inspect, dissect, discriminate, separate.

4. Analyzing (2001): Breaking materials or concepts into parts; determining how the parts relate to one another, how they interrelate, or how the parts relate to an overall structure or purpose. Mental actions included in this function are differentiating, organizing, and attributing, as well as being able to distinguish between the components or parts. When one is analyzing, he/she can illustrate this mental function by creating spreadsheets, surveys, charts, diagrams, or graphic representations.

5. Synthesis (1956): The ability to put parts together to form a coherent or unique new whole. Related verbs: compose, produce, design, assemble, create, prepare, predict, modify, tell, plan, invent, formulate, collect, set up, generalize, document, combine, relate, propose, develop, arrange, construct, organize, originate, derive, write.

5. Evaluating (2001): Making judgments based on criteria and standards through checking and critiquing. Critiques, recommendations, and reports are some of the products that can be created to demonstrate the processes of evaluation. In the newer taxonomy, evaluating comes before creating, as it is often a necessary precursor before one creates something.

6. Evaluation (1956): The ability to judge, check, and even critique the value of material for a given purpose. Related verbs: judge, assess, compare, evaluate, conclude, measure, deduce, argue, decide, choose, rate, select, estimate, validate, consider, appraise, value, criticize, infer.

6. Creating (2001): Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing. Creating requires users to put parts together in a new way, or synthesize parts into something new and different, creating a new form or product. This process is the most difficult mental function in the new taxonomy.

Table 1.1 – Bloom vs. Anderson/Krathwohl

________________________________________________________________

Changes from the original taxonomy (Diagram 1.1, Wilson, Leslie O., 2001)

Note: Bloom’s taxonomy revised – the author critically examines his own work. After creating the cognitive taxonomy, one of the weaknesses Bloom himself noted was that there was a fundamental difference between his “knowledge” category and the other five levels of his model, as those levels dealt with intellectual abilities and skills in relation to interactions with types of knowledge. Bloom was very aware that there was an acute difference between knowledge and the mental and intellectual operations performed on, or with, that knowledge. He identified specific types of knowledge as:

  • Terminology
  • Specific facts
  • Conventions
  • Trends and sequences
  • Classifications and categories
  • Criteria
  • Methodology
  • Principles and generalizations
  • Theories and structures

Levels of Knowledge – The first three of these levels were identified in the original work, but rarely discussed or introduced when initially discussing uses for the taxonomy. Metacognition was added in the revised version.

  • Factual Knowledge – The basic elements students must know to be acquainted with a discipline or solve problems.
  • Conceptual Knowledge – The interrelationships among the basic elements within a larger structure that enable them to function together.
  • Procedural Knowledge – How to do something, methods of inquiry, and criteria for using skills, algorithms, techniques, and methods.
  • Metacognitive Knowledge – Knowledge of cognition in general, as well as awareness and knowledge of one’s own cognition.  (29)

(Summarized from: Anderson, L. W. & Krathwohl, D.R., et al (2001) A taxonomy for learning, teaching and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.)
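The intersection of knowledge types and cognitive processes described above can be sketched as a small two-dimensional grid. The level names below come from the revised taxonomy itself; the `chart_objective` helper is a hypothetical illustration of how an instructional objective might be located at a (knowledge, process) cell.

```python
# The two dimensions of the revised (2001) taxonomy, each ordered from
# simple to complex, as listed in the text above.
KNOWLEDGE_DIMENSION = ["Factual", "Conceptual", "Procedural", "Metacognitive"]
COGNITIVE_PROCESSES = ["Remembering", "Understanding", "Applying",
                       "Analyzing", "Evaluating", "Creating"]

def chart_objective(knowledge: str, process: str) -> tuple:
    """Locate an objective as a (row, column) cell in the taxonomy grid."""
    return (KNOWLEDGE_DIMENSION.index(knowledge),
            COGNITIVE_PROCESSES.index(process))

# Example: "compare usage statistics between user groups" works on
# conceptual knowledge at the Analyzing level.
print(chart_objective("Conceptual", "Analyzing"))  # -> (1, 3)
```

Recording each objective's cell like this is one way to document the "melding" the text describes, and to spot whether a course clusters at the simple end of either dimension.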

One of the things that clearly differentiates the new model from that of the 1956 original is that it lays out components nicely so they can be considered and used. Cognitive processes, as related to chosen instructional tasks, can be easily documented and tracked. This feature has the potential to make teacher assessment, teacher self-assessment, and student assessment easier or clearer as usage patterns emerge. (See PDF link below for a sample.)

As stated before, perhaps surprisingly, these levels of knowledge – factual, conceptual, and procedural – were indicated in Bloom’s original work, but they were never fully understood or used by teachers, because most of what educators were given in training consisted of a simple chart with the listing of levels and related accompanying verbs. The full breadth of Handbook I, and its recommendations on types of knowledge, were rarely discussed in any instructive or useful way. Another rather gross lapse in common teacher training over the past 50+ years is that teachers-in-training are rarely made aware of any of the criticisms leveled against Bloom’s original model.

Please note that in the updated version the term “metacognitive” has been added to the array of knowledge types. For readers not familiar with this term, it means thinking about one’s thinking in a purposeful way, so that one knows about cognition and also knows how to regulate one’s cognition.

Knowledge Dimensions Defined:

Factual Knowledge is knowledge that is basic to specific disciplines. This dimension refers to essential facts, terminology, details or elements students must know or be familiar with in order to understand a discipline or solve a problem in it.

Conceptual Knowledge is knowledge of classifications, principles, generalizations, theories, models, or structures pertinent to a particular disciplinary area.

Procedural Knowledge refers to information or knowledge that helps students to do something specific to a discipline, subject, or area of study. It also refers to methods of inquiry, very specific or finite skills, algorithms, techniques, and particular methodologies.

Metacognitive Knowledge is the awareness of one’s own cognition and particular cognitive processes. It is strategic or reflective knowledge about how to go about solving problems and cognitive tasks, including contextual and conditional knowledge and knowledge of self.

*A comprehensive example from the book is provided with publisher permission at http://www.scribd.com/doc/933640/Bloom-Revised

Sources:

Anderson, L. W. and Krathwohl, D. R., et al (Eds.) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. Allyn & Bacon. Boston, MA (Pearson Education Group). **There is a newer (2013), abridged, less expensive version of this work.

Bloom, B.S. and Krathwohl, D. R. (1956) Taxonomy of Educational Objectives: The Classification of Educational Goals, by a committee of college and university examiners. Handbook I: Cognitive Domain. NY, NY: Longmans, Green

Krathwohl, D. R. (2002) A Revision of Bloom’s Taxonomy. (PDF) in Theory into Practice. V 41. #4. Autumn, 2002. Ohio State University. Retrieved @

The Anderson/Krathwohl text has numerous examples of how these concepts can be used for K-12 teachers. Since I have used this material in my teaching (a special topics graduate course on taxonomies and their uses entitled Beyond Bloom’s,) and have also presented on this topic in several national conferences, I have artifacts and examples of how these revisions can be used effectively in college teaching. While I have a link above to an artifact, to be fully understood you might need to view the original assignment and the supportive documents. I would be happy to provide those and discuss them more fully.  I am always happy to share information with other educators.

Originally published in ED 721 (2001) course handbook, and at:

http://www4.uwsp.edu/education/lwilson/curric/newtaxonomy.htm (2001, 2005), revised 2013 

Topic 0020: Assessment vs Evaluation

What is the difference between “assessment” and “evaluation?”

Assessment is the process of objectively understanding the state or condition of a thing, by observation and measurement. Assessment of teaching means taking a measure of its effectiveness. “Formative” assessment is measurement for the purpose of improving teaching. “Summative” assessment is what we normally call “evaluation.”

Evaluation is the process of observing and measuring a thing for the purpose of judging it and of determining its “value,” either by comparison to similar things, or to a standard. Evaluation of teaching means passing judgment on it as part of an administrative process.

Ideally, a fair and comprehensive plan to evaluate teaching would incorporate many data points drawn from a broad array of teaching dimensions. Such a plan would include not only student surveys, but also self-assessments, documentation of instructional planning and design, evidence of scholarly activity to improve teaching, and most importantly, evidence of student learning outcomes.

Reference: http://www.itlal.org/?q=node/93

 

 

TOPIC 0019: Writing Your Literature Review

What is the literature review?

  1. A literature review summarises, critically analyses and evaluates previous research available on the subject, presenting this in an organised way. It should address a clearly articulated question or series of questions.
  2. It is NOT:
    • A descriptive list or set of summaries of books, articles, etc.
    • An exhaustive bibliography of everything ever written on the topic – you need to make a decision about what to include
    • Your arguments and ideas (like an essay)

Why do we write a literature review?

  • Demonstrate an in-depth understanding of your topic area including key concepts, terminology, theories and definitions
  • Identify who the major thinkers are
  • Identify what research has been done in that area
  • Find gaps in the research or current areas of interest to help you formulate your own research question
  • Identify the main research methodologies in your subject area
  • Identify main areas of agreement or controversy
  • Convince the reader that your research questions are significant, important and interesting
  • Convince the reader that your thesis will make an original contribution to the area being investigated.

Steps to complete the literature review

  1. Find relevant literature on your topic and follow trails of references
  2. Identify themes/ideas/theories/approaches to the topic that have emerged from reading
  3. Introduce ideas by themes/theory/approach/chronologically or any other appropriate structure but do not just list different authors’ viewpoints
  4. Introduce and explain each theme (or theory/approach), present evidence from readings (agreements/disagreements), comment critically, and relate it to your own research

Critical Questioning

  1. Who is the author?
  2. What is the author’s central point or main argument?
  3. What findings and conclusions are made?
  4. What evidence is used to support the conclusions?
  5. Is the evidence relevant? What methodology has the author used? What are the strengths and limitations?
  6. Does the author make any assumptions?
  7. What is not being said?
  8. Is there any explicit or hidden bias?
  9. How is the text relevant to YOUR project or assignment?
  10. How does this link with other texts that you have read?

(FOR SYNTHESIZING INFORMATION, REFER TO: Topic 0007: Matrix Method for Literature Review – Approaches to Identify Research Gaps and Generate RQ)

Figure 1 Structuring Literature Review

Topic  (broad to narrow)

Research Title: The Design and Development of E-Portfolio for HEIs in Social Sciences and Humanities

  • 2.1 Chapter Overview
  • 2.2 E-Learning in Malaysia
  • 2.3 E-Portfolio in HEIs
  • 2.4 E-Portfolio Definition and Purpose
  • 2.5 E-Portfolio Reflective Learning Strategies
    • 2.5.1 Critical Thinking
    • 2.5.2 Problem-Solving
    • 2.5.3 Analytical Skills
  • 2.6 Conclusion and Gaps for Further Study

Critical Writing in a Literature Review

  1. Comparing and contrasting different theories, concepts, etc., and indicating the position you are taking for your own work
  2. Showing how limitations in others’ work create a research gap for you
  3. Strategic and selective referencing to support the underpinning arguments which form the basis of your research
  4. Synthesising and reformulating arguments from various sources to create a new/more developed point of view
  5. Agreeing with/defending a point of view or finding
  6. Accepting that current viewpoints have some strengths but qualifying your position by highlighting weaknesses
  7. Rejecting a point of view with reasons (e.g. lack of evidence)
  8. Making connections between sources

Adapted from Ridley, D. (2008) The literature review: a step-by-step guide for students. London: Sage.


Topic 0018: Design and Development Methods and Strategies

Design and Development Methods & Strategies

Design and development research uses a wide variety of methodologies. Evaluation research techniques (both quantitative and qualitative) are also included in many studies. The table below lists the methods commonly used in design and development research:

 Common Methods Employed in Design and Development Research

Type of Research | Project Emphasis | Research Methods Employed
Product & Tool Research | Comprehensive Design & Development Projects | Case study, content analysis, evaluation, field observation, in-depth interview
Product & Tool Research | Phases of Design & Development | Case study, content analysis, expert review, field observation, in-depth interview, survey
Product & Tool Research | Tool Development & Use | Evaluation, expert review, in-depth interview, survey
Model Research | Model Development | Case study, Delphi, in-depth interview, literature review, survey, think-aloud methods
Model Research | Model Validation | Experimental, expert review, in-depth interview
Model Research | Model Use | Case study, content analysis, field observation, in-depth interview, survey, think-aloud methods
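For quick reference, the table above can also be held as a simple lookup structure. This is only an illustrative sketch; the method lists are copied directly from the table.

```python
# Methods from the table above, keyed by (research type, project emphasis).
DDR_METHODS = {
    ("Product & Tool Research", "Comprehensive Design & Development Projects"):
        ["case study", "content analysis", "evaluation",
         "field observation", "in-depth interview"],
    ("Product & Tool Research", "Phases of Design & Development"):
        ["case study", "content analysis", "expert review",
         "field observation", "in-depth interview", "survey"],
    ("Product & Tool Research", "Tool Development & Use"):
        ["evaluation", "expert review", "in-depth interview", "survey"],
    ("Model Research", "Model Development"):
        ["case study", "Delphi", "in-depth interview",
         "literature review", "survey", "think-aloud methods"],
    ("Model Research", "Model Validation"):
        ["experimental", "expert review", "in-depth interview"],
    ("Model Research", "Model Use"):
        ["case study", "content analysis", "field observation",
         "in-depth interview", "survey", "think-aloud methods"],
}

# Look up the methods commonly used to validate a model:
print(DDR_METHODS[("Model Research", "Model Validation")])
# -> ['experimental', 'expert review', 'in-depth interview']
```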

Is DDR Mixed Methods?

The term ‘mixed methods research’ has been used to describe studies that combine qualitative and quantitative methods. This is a way of using multiple approaches to answer given research questions.

Ross and Morrison (2004) support this trend when they take the position that qualitative and quantitative approaches are more useful when used together than when either is used alone and, when combined, are likely to yield a richer and more valid understanding.

Johnson and Onwuegbuzie (2004) describe the areas of agreement between advocates of the two positions:

  • While individual perceptions are affected by one’s background experience, there are not different realities, but simply varying perceptions.
  • It is possible for one data set to be compatible with different theories, i.e., there are alternative explanations of phenomena.
  • Empirical research does not always provide final proof.
  • Researchers are bound by their values, attitudes and beliefs.

These fundamental agreements make mixed methods research not only practical in many situations, but also logically sound. However, many design and development studies that employ multiple research methods are not mixing the qualitative and quantitative orientations; they are simply making use of a variety of similar strategies.

Topic 0017: Internet-Mediated Research (IMR)


State of the art

It is at the data-gathering stage that IMR methods have the biggest impact, as opposed to other stages and processes in the methodology of a piece of research, such as conceptualisation, interpretation, etc. (e.g. Hewson, 2008).

IMR approaches which seemed viable at this time included interviews, focus groups and observational studies which used linguistic data, and procedures for implementing these methods were devised and piloted (e.g. interviews: Chen & Hinton, 1999; Murray & Sixsmith, 1998; focus groups: Gaiser, 1997; Tse, 1999; Ward, 1999; linguistic observation: Bordia, 1996; Workman, 1992; Ward, 1999).

  • most relevant to qualitative social science research
  • cost and time efficiency
  • expanded geographical reach, and
  • access to hard-to-reach populations

Tools, technologies, procedures

Synchronous approaches which gather data in real time

  • debate, clarification and resolution
  • more rigorous procedures
  • obtaining signed consent forms

One way of using asynchronous approaches is to follow them in ‘real time’, e.g. subscribe to a group and follow an ongoing discussion.

Design issues and strategies

  • Solely text-based methods
  • Lack of physical proximity
    • facial expressions,
    • tone of voice,
    • body language, etc.

Depth and reflexivity

  • offering potentially more time for thoughtful, reflective responses
  • elicitation of richer, more elaborate qualitative data
  • enhanced reflexivity and depth and accuracy of interviewees’ responses (Bowker & Tuffin, 2004; Kenny, 2005; McDermott & Roen, 2012; Murray & Sixsmith, 1998; O’Connor & Madge, 2002)

Levels of rapport

‘Getting to know’ participants may help in maximising levels of confidence in the authenticity of the accounts they offer during the course of an interview.

Anonymity, disclosure, social desirability

  • high degree of privacy (Hewson et al., 2003)
  • higher levels of self-disclosure than face-to-face (ftf) contexts (e.g. Bargh, McKenna & Fitzsimons, 2002; Joinson, 2001)

Principles for good practice in qualitative internet-mediated interview and focus group research.

  1. Use robust, well-tested procedures which have been well-piloted (reliability).
  2. Use the simplest low-tech solutions and equipment available that will do the job (reliability, accessibility).
  3. Use appropriate procedures for verifying identity (e.g. offline, audio / video) where this is crucial (e.g. highly sensitive research) (ethics, validity).
  4. Adopting clear strategies for establishing rapport has been shown to work well and is advisable (validity).
  5. Remain mindful of potential trade-offs when deciding upon procedures and making design choices, e.g. asynchronous approaches may facilitate depth and reflexivity but reduce conversational ‘flow’ (validity).
  6. Related to the above principle, remain aware of possibilities for combining methods, e.g. online and offline, asynchronous and synchronous, etc. (validity).
  7. Carefully consider security and confidentiality risks when making sampling and procedural design choices, and the ethical responsibility to inform potential participants of any non-trivial risks they may be unaware of (ethics, validity).
  8. Adopt procedures for maximising security and confidentiality where possible (e.g. setting up a dedicated online research site) (ethics, validity).
  9. Remain mindful of the possible threats to participant confidentiality and anonymity that can emerge from dissemination and publication procedures, and take careful measures to minimise these threats (ethics).
  10. Respect standards of privacy and netiquette, and pass participation requests through moderators where appropriate (e.g. sampling from mailing lists or newsgroups) (ethics).
  11. Make sure participation requests are well-constructed, containing information on researcher affiliations, contact details for further information, and value of the research (ethics, validity).
  12. Carefully consider how different sampling approaches and design procedures may facilitate or restrict access by different groups (accessibility, ethics).

Principles for good practice in qualitative internet-mediated observation and document analysis research.

  1. Keep in mind that different observation sites / sources may restrict or facilitate the design options available (e.g. using archived logs precludes participant approaches and makes disclosure / consent often implausible; observing real-time chat makes undisclosed, nonparticipant observation often untenable) (ethics, validity).
  2. Keep in mind the different types of dialogue and interaction that may be encouraged by synchronous (e.g. playful) and asynchronous (e.g. reflective) technologies when selecting which is most appropriate (validity).
  3. Carefully consider whether undisclosed approaches are ethically justifiable, keeping in mind the following key factors: privacy expectations; sensitivity of data; levels of risk of harm; legal and copyright regulations; scientific value (ethics).
  4. Keep in mind that trade-offs will often emerge, especially in relation to ethics procedures, e.g. disclosure may increase the risk of reducing data authenticity and validity, but also reduce the risk of harming a group (ethics, validity).
  5. Keep in mind that it is often good practice to consult moderators before carrying out observation (e.g. of online discussion groups), particularly where disclosed approaches are proposed; however, moderators may also have agendas and opinions which could be prohibitive to the research, and in some cases not making contact is arguably justified (ethics, validity).
  6. Remain mindful of the increased traceability of data sources online, and the potential associated threats to anonymity / confidentiality, particularly in devising dissemination / publication strategies (ethics).
  7. Take steps to maximise data security, especially when utilising less secure technologies such as email (e.g. in soliciting documents) (ethics).

Topic 0016: Differentiate between Quantitative Research vs Qualitative Research

There are the different between Quantitative Research vs Qualitatitave Research;

| Aspect | Quantitative | Qualitative |
| --- | --- | --- |
| Definition | Concerned with discovering facts about social phenomena. | Concerned with understanding human behaviour from the informants' perspective. |
| Focus | Assumes a fixed and measurable reality. | Assumes a dynamic and negotiated reality. |
| Purpose | To test theories; to construct facts; to differentiate; to correlate; to predict human behaviour; to explain statistically. | To complete theories; to enhance knowledge; to explain facts; to explain phenomena. |
| Research Methods | Experimental; quasi-experimental; structured interview; structured observation; survey; correlational. | Observation; unstructured interview; document analysis; case study; ethnography; narrative; grounded theory. |
| Strategy | Formal (structured) interviews; surveys; questionnaires. | Informal (unstructured) interviews; observation; documentation; audio-visual materials. |
| Data Analysis | Raw data as numbers; statistical; deductive; software such as SPSS, SEM-AMOS, SmartPLS, etc. | Raw data as verbatim transcripts; continuous analysis; non-statistical; inductive; software such as ATLAS.ti, NVivo, etc. |
| Data Interpretation | Generalization. | Summarization. |
| Problems | Variables; validity. | Time-consuming; reliability; no standardized procedure; findings not generalizable to a population. |
| Terminology | Explain; variables; hypothesis; conceptual; generalization. | Explore; element; category; theme; pattern; model; framework. |
| Sampling Method | Probability sampling (Creswell, 2005; Krejcie & Morgan, 1970). | Non-probability sampling (Miles & Huberman, 1994; Creswell, 2005). |

Refer Topic 0013: Choosing a Sampling Method
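To make the probability-sampling row concrete, here is a minimal sketch of the widely used Krejcie and Morgan (1970) sample-size formula. The function name is my own, and the defaults (chi-square of 3.841 for one degree of freedom at the .05 level, an assumed proportion of 0.5, and a 5% margin of error) are the assumptions used to build their published table:

```python
import math

def krejcie_morgan_sample_size(population: int,
                               chi_sq: float = 3.841,  # chi-square for df = 1 at the .05 level
                               p: float = 0.5,         # assumed population proportion (maximises n)
                               d: float = 0.05) -> int:
    """Required sample size per the Krejcie & Morgan (1970) formula."""
    numerator = chi_sq * population * p * (1 - p)
    denominator = d ** 2 * (population - 1) + chi_sq * p * (1 - p)
    return math.ceil(numerator / denominator)

# For a population of 1,000, the published table gives 278.
print(krejcie_morgan_sample_size(1000))  # 278
```

Note how the required sample grows much more slowly than the population: a population of 100 needs 80 respondents, but a population of 1,000 needs only 278.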

Topic 0015: Needs Assessment

Definition of Key Terms

“Need” refers to the gap or discrepancy between a present state (what is) and a desired state (what should be). The need is neither the present nor the future state; it is the gap between them.

“Target Group” Needs assessments are focused on particular target groups in a system. Common target groups in education settings include students, parents, teachers, administrators, and the community at large. Ideally, needs assessments are initially conducted to determine the needs of the people for whom the system exists (the service receivers). However, a “comprehensive” needs assessment often takes into account needs identified in other parts of a system. For example, a needs assessment might include the concerns of the “service providers” (e.g. teachers, guidance counselors, or school principals—the people who have a direct relationship with the service receivers) or “system issues” (e.g., availability of programs, services, and personnel; level of program coordination; and access to appropriate facilities).

A “Needs Assessment” is a systematic approach that progresses through a defined series of phases. Needs Assessment focuses on the ends (i.e., outcomes) to be attained, rather than the means (i.e., process). For example, reading achievement is an outcome whereas reading instruction is a means toward that end. It gathers data by means of established procedures and methods designed for specific purposes. The kinds and scope of methods are selected to fit the purposes and context of the needs assessment. Needs assessment sets priorities and determines criteria for solutions so that planners and managers can make sound decisions. Needs assessment sets criteria for determining how best to allocate available money, people, facilities, and other resources. Needs assessment leads to action that will improve programs, services, organizational structure and operations, or a combination of these elements.

Steps

Let’s take a quick look at general steps taken in a needs assessment.

  1. Exploration and identification. During the first phase of the needs assessment, determine what you already know about your organization’s needs, whether they concern additional resources, new technologies, or market expansion. It’s about figuring out where you are and where you want to be. You also need to uncover other, undisclosed needs that may be keeping you from moving from where you are to where you want to be. You will often rank these needs in order of importance and then set the scope of your research—in other words, the needs you are going to focus on.
  2. Data gathering and analysis. At this stage you collect the information you need to better understand the gaps (needs) between where you are and where you want to be. Data may be collected from internal or external records through research techniques such as surveys and document analysis. After the data is collected, it is organized and analyzed.
  3. Utilization. This is where the analyzed data is used to create and implement a plan of action. You will set priorities, evaluate solutions, apply a cost-benefit analysis to determine which solution is best in light of the relative costs and benefits of each, formulate a plan to implement your solution, and then allocate the resources necessary for implementation. Again, the goal is to develop a plan that closes the gaps between the current state and the desired future state.
  4. Evaluation. You will evaluate the results of the action plan against your objectives: has the action plan moved you closer to where you want to be? Evaluation can help you determine what made an action plan successful, or find the errors in your needs assessment. For example, did you miss an important gap, or were the resources you allocated insufficient to close it?
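The gap-and-priority logic in steps 1–3 can be sketched in a few lines of code. This is purely illustrative—the needs, field names, and cost figures below are hypothetical, not a standard tool:

```python
# Hypothetical needs data: each need records the present state ("what is"),
# the desired state ("what should be"), and an estimated cost of ignoring it.
needs = [
    {"need": "reading achievement",  "current": 62, "desired": 85, "cost_to_ignore": 9},
    {"need": "program coordination", "current": 40, "desired": 70, "cost_to_ignore": 5},
    {"need": "facility access",      "current": 75, "desired": 80, "cost_to_ignore": 2},
]

# The need is neither the present nor the desired state: it is the gap between them.
for n in needs:
    n["gap"] = n["desired"] - n["current"]

# Place the gaps in priority order on the basis of the cost to ignore them.
priorities = sorted(needs, key=lambda n: n["cost_to_ignore"], reverse=True)

for rank, n in enumerate(priorities, start=1):
    print(f"{rank}. {n['need']}: gap = {n['gap']}, cost to ignore = {n['cost_to_ignore']}")
```

The sort key is the point: needs are ranked by the cost of ignoring them, not by the raw size of the gap.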

What Does Watkins Say?

Watkins describes analysis as “a process for breaking something down to understand its component parts and their relationships” (R. Watkins, personal communication, June 13, 2016). He concludes, “A needs assessment identifies gaps between current and desired results and places those [gaps] in priority order on the basis of the costs to ignore the needs.” They also stress that the gap itself is the need.


Figure 1: Needs Assessment vs. Needs Analysis Concept Map V2.0

 


Sources (adapted from):

  1. http://study.com/academy/lesson/what-is-needs-assessment-definition-examples-quiz.html
  2. https://www2.ed.gov/admins/lead/account/compneedsassessment.pdf
  3. http://www.ispi.org/ISPI/Resources/PX/Articles/Editors__Pick/Needs_Assessment_vs_Needs_Analysis__What_s_the_Diff_.aspx