Workshops

Boost Your Learning and Connect with Experts in Our Exclusive Workshops!

As part of our congress, we invite you to participate in a series of interactive and practical workshops designed to take your skills to the next level. Immerse yourself in dynamic sessions and connect with passionate professionals.

  • Empowering scientists through replicability: How to navigate and apply the credibility revolution (Dr. Oscar Lecuona).
    Places: 1-30. Time: 3 hours
Workshop summary

This workshop focuses on training and empowering scientists in the recent reforms proposed in the context of the “credibility revolution”. We seek to raise awareness and empower researchers to produce and consume more credible science, and to build a better scientific community able to meet the challenges of the 21st century. These reforms respond to the “replication crisis”, scientific malpractice, and the perverse incentive structure of scientific publishing. We briefly review these events and their main characteristics, and provide checklists for questionable research practices (QRPs) and, in contrast, best research practices (BRPs) to overcome these issues, including open science practices. We will then present published articles showing varying degrees of BRPs and QRPs to train attendees in the use of these checklists. Next, attendees will be invited to bring their own research questions, or to choose among provided examples, to start a pre-registration through the Open Science Framework and other services. We will review the specifications needed for an effective pre-registration; more concretely, how to avoid and deal with potential QRPs and engage in BRPs. More technical aspects will also be covered, such as a short introduction to power analysis for sample size planning, selection of psychometric instruments, and the procedures and data analyses to commit to. Open science practices will be endorsed and applied through the FAIR criteria and services such as the Open Science Framework. Finally, we will cover when and how to deal with obstacles and deviations from pre-registrations. In doing so, we aim to educate the research community by empowering attendees as experts embedded in research teams and as researchers in their own right.

Attendees will learn about the replication crisis and the credibility revolution’s reforms, and will learn to perform a pre-registration, run a standard power analysis with G*Power, and engage in open science practices using the Open Science Framework. They will leave with a working knowledge of the replication crisis and a clear framework of reforms to overcome it (the “credibility revolution”), along with practical skills for performing a rigorous pre-registration and its supporting skills: power analysis and sample size planning, open science practices, and how to present a pre-registered manuscript with a clear rationale for specific steps and any deviations encountered. The workshop is mainly practical, with short presentations followed by interactive exercises in which attendees work on their own projects or on provided examples.

Drs. Oscar Lecuona, Guido Corradi, and Ariadna Angulo-Brunet are Assistant Professors at UCM, UIB, and UOC, respectively. Dr. García-Garzón is a data scientist at Shakers SL. They have published several contributions in high-impact journals on the replication crisis and the credibility revolution, and all have contributed to projects implementing current best practices regarding replicability and credibility, such as pre-registrations, open science practices, replication studies, and multi-country studies. In addition, Drs. Lecuona and Corradi have lectured on these topics to undergraduate and graduate students at several universities, including Universidad Complutense de Madrid, Universidad Villanueva, and Universidad Rey Juan Carlos.
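
The power analysis itself is demonstrated in G*Power during the workshop; purely as a rough illustration of the same calculation in R (using the pwr package, which is not part of the workshop materials), a priori sample size planning for a simple two-group comparison might look like this:

    # Minimal sketch of a priori sample size planning, assuming a two-sample
    # t-test, a medium expected effect (d = 0.5), alpha = .05, and 80% power.
    # Values are illustrative; the workshop performs this step in G*Power.
    library(pwr)

    pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")
    # The reported n per group (about 64 here) can then be written directly
    # into the pre-registration on the Open Science Framework.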


  • Solving the problems of ipsative data: Designing and scoring forced-choice and other questionnaires in comparative format (Professor Anna Brown).
    Places: 15-30. Time: 5 hours
Workshop summary

To prevent response biases associated with the use of rating scales, test items may be presented as comparative judgements. These include the popular ‘forced choice’ format, in which respondents rank two or more stimuli. The extent of preference can also be expressed, for example by selecting ‘grades of preference’ using categories such as “much more” or “slightly more”, or by ‘proportions-of-total’ formats that distribute a fixed number of points between several stimuli. Responses collected with such formats are relative within the person, leading to a major psychometric challenge: interpersonally incomparable (ipsative) data. Since measurement of individual differences requires locating each person’s absolute position on the traits of interest, appropriate methods of scaling ipsative data are required.

This workshop will introduce participants to state-of-the-art methods for analyzing and scoring comparative judgements and provide recommendations for designing effective comparative measures. I will focus on the Thurstonian factor-analytic approach, applicable to all types of ipsative data: binary, ordinal, and ratio. The Thurstonian family includes the TIRT model for choice and ranking (Brown & Maydeu-Olivares, 2011), the compositional model for percentage-of-total data (Brown, 2016), and the ordinal IRT model for graded-preference data (Brown & Maydeu-Olivares, 2017). This unified approach will be demonstrated with empirical data analysis examples, including well-known personality questionnaires.

Who should attend? The workshop is intended for researchers and practitioners involved in the design, analysis, implementation, and use of comparative measures. It will have a strong theoretical component, but with plenty of empirical examples. Prerequisites: participants should be familiar with factor analysis and item response theory. Some experience of fitting CFA models is recommended but not essential.

Workshop outline:
  • Part 1 – Psychology and mathematics of comparative judgements. We will start by reviewing types of comparative judgements (forced choice, Q-sorts, graded preferences, proportions-of-total) and note their differences from absolute judgements and the challenges in scaling them. We then introduce psychological models for choice, focusing on Thurstone’s Law of Comparative Judgement and how it can be applied to link the choices people make to their personality attributes. We then see how the same approach can be used to model responses to graded preferences and compositions, providing proper scaling on the attributes of interest.
  • Part 2 – Methods for deriving scale scores on personality attributes and their properties. We will outline “the new rules of comparative measurement” and move on to learning how to create informative comparisons. We consider questionnaire design, including item writing and grouping items for comparison, and emphasize considerations for maximizing information as well as for preventing response biases. This part has a substantial practical component (see the sketch below), in which participants may follow a provided empirical example and apply supplied R functions to specify a measurement model, test it using a supplied sample of test responses, and score respondents.
  • Part 3 – Applications of comparative modelling to practical problems. We will consider examples of well-known assessments and some new ones. To conclude, we will summarize the benefits and limitations of comparative judgements and discuss future directions for research and practice.
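
The workshop provides its own R functions for the practical part of Part 2; purely as a hedged illustration of the modelling workflow (assuming the openly available thurstonianIRT R package, which is not the workshop’s material, and a hypothetical data frame fc_responses of forced-choice rankings), fitting a TIRT model might look roughly like this:

    # Illustrative sketch only: specify blocks, convert rankings to pairwise
    # comparisons, fit a Thurstonian IRT model, and score respondents.
    # Package choice, item names, and trait names are assumptions.
    library(thurstonianIRT)

    # Questionnaire design: items per block, the trait each item measures,
    # and the keying direction of each item.
    blocks <-
      set_block(items = c("i1", "i2", "i3"),
                traits = c("Extraversion", "Conscientiousness", "Neuroticism"),
                signs = c(1, 1, -1)) +
      set_block(items = c("i4", "i5", "i6"),
                traits = c("Extraversion", "Conscientiousness", "Neuroticism"),
                signs = c(1, -1, 1))

    # Convert raw rankings (one column per item) into binary pairwise outcomes.
    tirt_data <- make_TIRT_data(data = fc_responses, blocks = blocks,
                                format = "ranks", direction = "larger")

    # Fit the TIRT model via lavaan and obtain normative trait score estimates.
    fit <- fit_TIRT_lavaan(tirt_data)
    trait_scores <- predict(fit)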


  • Cognitive Diagnosis Modeling: A General Framework Approach, Its Implementation in R, and Some Recent Developments (Dr. Jimmy de la Torre and Dr. Miguel A. Sorrel).
    Places: 5-30. Time: 6 hours and 30 minutes
Workshop summary

This short course introduces cognitive diagnosis modeling (CDM) as an alternative psychometric framework for developing assessments and analyzing item-response data. The models in this framework are designed to generate diagnostic output that classifies individuals into a discrete profile of latent attributes. In addition to the rationale, foundations, and frameworks for CDM, the course covers recent developments in the area, as well as tools for implementing these analyses in R. The primary aim is to provide participants with the background needed to appreciate the use of CDMs in various applied settings and to highlight the theoretical foundations necessary for proper CDM implementation. The course will cover the fundamentals of CDM and recent developments, including approaches and models for cognitive diagnosis; model estimation, fit evaluation, and comparison; Q-matrix validation; model identifiability; and more. Participants will also be introduced to CDM-related R packages developed by the instructors (GDINA, cdmTools, and cdcatR), and several step-by-step examples will illustrate how these analyses can be implemented in R (see the sketch below).

This course is designed for applied and theoretical practitioners and researchers interested in diagnostic and formative assessments in educational and psychological fields. Participants should have a basic understanding of psychometric theory (e.g., CTT, IRT) and some familiarity with R programming. By the end of the course, participants will: (1) be familiar with major models and approaches to diagnostic modeling; (2) understand the issues in developing and analyzing cognitively diagnostic assessments; (3) be aware of recent research in CDM; and (4) have a clear understanding of the capabilities of several R packages for CDM analysis.

The workshop will be structured as follows:
  • Part 1: Introduction (diagnostic modeling and G-DINA frameworks)
  • Part 2: CDM analysis with R (calibration, fit evaluation, Q-matrix validation, differential item functioning, classification accuracy, and methods for small sample sizes)
  • Part 3: Recent areas of development (adaptive testing and new model developments)
Each section presents content and includes hands-on exercises. For the R sessions, a handout with detailed code explanations, code, and datasets will be provided for practice during the session.

Dr. Jimmy de la Torre, Professor at the Faculty of Education, The University of Hong Kong, is a leading expert in cognitive diagnosis models (CDMs). His contributions include developing models, estimation code, and a general framework for model estimation, comparison, and Q-matrix validation. Among other honors, he has received the Presidential Early Career Award for Scientists and Engineers (2009), the Jason Millman Promising Measurement Scholar Award (2009), and the Bradley Hanson Award (2017). He has conducted close to four dozen CDM workshops in 15 countries on four continents. Dr. Miguel A. Sorrel, Associate Professor at the Autonomous University of Madrid, focuses on measurement in the behavioral and health sciences, including psychometric models, model fit assessment, and computerized adaptive testing. He received the Young Methodologist EAM Award (2016) and the Early Career Award from the International Association for Computerized Adaptive Testing (2022). Mr. Diego Iglesias, a doctoral candidate at the Autonomous University of Madrid, focuses on improving prediction and generalization in psychology using statistical models and cross-validation techniques.
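
As a small, hedged preview of the kind of analysis covered in Part 2 (using the simulated example data shipped with the GDINA package; the options and object names here are illustrative and not taken from the course handout):

    # Minimal sketch of a CDM analysis with the GDINA package; data and
    # options are illustrative, not the course materials.
    library(GDINA)

    dat <- sim10GDINA$simdat   # simulated item responses (persons x items)
    Q   <- sim10GDINA$simQ     # Q-matrix linking items to attributes

    # Calibrate the saturated G-DINA model.
    fit <- GDINA(dat = dat, Q = Q, model = "GDINA")

    # Absolute/relative model fit and item-level fit statistics.
    modelfit(fit)
    itemfit(fit)

    # Empirical Q-matrix validation: flags potentially misspecified q-vectors.
    Qval(fit)

    # Attribute profile classifications (e.g., MAP estimates) per respondent.
    head(personparm(fit, what = "MAP"))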


  • New Perspectives on Measurement Invariance Testing (David Goretzko).
    Places: 5-40. Time: 6 hours
Workshop summary

In this course, we will provide an overview of newly developed approaches for analyzing measurement invariance and discuss a causal inference framework that allows researchers to reason about how certain covariates affect their measurement models. We will start with a brief recap of common methods for measurement invariance testing and of important concepts and definitions in this context.

In the second part of the course, we will discuss new methodological developments in measurement invariance testing that address different shortcomings of popular strategies such as invariance testing with multigroup CFA (illustrated in the R sketch below). A specific focus will be on recently developed exploratory approaches such as exploratory factor analysis trees (EFA trees; Sterner & Goretzko, 2023) and mixture multigroup EFA (De Roover, 2021; De Roover et al., 2022). These methods can be used to investigate metric (weak) invariance more thoroughly, with regard to measured and unmeasured covariates, respectively. We will also discuss how these EFA-based approaches might be combined with subsequent CFA-based analyses to ultimately establish scalar (strong) invariance.

Afterwards, we will introduce a causal inference framework for studying measurement invariance based on directed acyclic graphs, which can be used to conceptualize the measurement process as well as potential influencing factors (e.g., cultural background) that render it non-invariant (e.g., Sterner, Pargent, Deffner, & Goretzko, 2024). Based on theoretical considerations and empirical findings, a causal model can be defined for the measurement process of interest. Such a causal model allows researchers to (a) communicate their assumptions about (non-)invariance across specific groups and about potentially biasing influences of certain covariates, and (b) derive proper modeling strategies to account for moderating effects of these covariates, ensuring meaningful latent mean comparisons.

The course lectures will be accompanied by practical exercises in which participants learn how to apply some of the newly developed methods for measurement invariance assessment, as well as how to conceptualize non-invariance in a causal model. For these exercises we will mainly use the open-source software R, but the course materials and lecture components are designed in a software-independent manner, so that participants without deep R knowledge can follow as well.
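
As a brief, non-authoritative illustration of the classical multigroup-CFA approach covered in the recap (assuming the lavaan package, a hypothetical dataset dat, and a grouping variable country):

    # Minimal sketch of classical measurement invariance testing with lavaan;
    # the model, dataset, and grouping variable are hypothetical.
    library(lavaan)

    model <- 'F =~ x1 + x2 + x3 + x4'

    # Configural model: same structure, all parameters free across groups.
    fit_configural <- cfa(model, data = dat, group = "country")

    # Metric (weak) invariance: factor loadings constrained equal across groups.
    fit_metric <- cfa(model, data = dat, group = "country",
                      group.equal = "loadings")

    # Scalar (strong) invariance: loadings and intercepts constrained equal.
    fit_scalar <- cfa(model, data = dat, group = "country",
                      group.equal = c("loadings", "intercepts"))

    # Likelihood-ratio tests comparing the nested invariance levels.
    lavTestLRT(fit_configural, fit_metric, fit_scalar)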


  • Modelling Responses in Multi-Item Measurements with R and Shiny (Patrícia Martinková).
    Places: 5-40. Time: 6 hours
Workshop summary

Item response analysis is crucial for developing high-quality educational and psychological assessments. It not only provides valuable insight into respondent behavior and student performance but also informs evidence-based policies. Over the years, various methods have been proposed for modeling item responses based on respondent data; recently, more complex data, such as item text wording, have also been harnessed with numerous analytical methods.

This workshop equips participants with a deeper understanding of item response analysis and practical skills for performing it using traditional methods, regression models, item response theory (IRT), and machine learning techniques. We will start with a step-by-step development of IRT models, illustrating their relationships with traditional item characteristics and with simpler regression models. Next, we will explore differential item functioning and measurement invariance: key concepts for detecting potentially biased items and for gaining a detailed understanding of student performance and respondent behavior across different social groups. The final part will be devoted to item text analysis using machine learning methods. The workshop follows selected chapters from “Computational Aspects of Psychometric Methods: With R”, authored by the instructors and published by Chapman & Hall/CRC in 2023.

The course lectures will be complemented by practical exercises, allowing participants to apply the presented techniques using R, a free and open-source statistical software. We will use the ShinyItemAnalysis and difNLR packages, along with other R packages. Moreover, the interactive ShinyItemAnalysis application and its add-on modules will be used for hands-on training, enabling participants to perform all necessary analyses in a user-friendly environment. Before the course, participants will receive detailed instructions on how to install the necessary software. Previous experience with R is a plus, but the course is designed to be accessible even to R novices.

Reference: Martinková, P., & Hladká, A. (2023). Computational Aspects of Psychometric Methods: With R. Chapman and Hall/CRC. https://doi.org/10.1201/9781003054313
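
As a small, hedged preview of the packages used in the practical exercises (the data below are simulated purely for illustration and are not the course datasets):

    # Illustrative sketch: DIF analysis with difNLR and launching the
    # interactive ShinyItemAnalysis application. Simulated data only.
    library(difNLR)
    library(ShinyItemAnalysis)

    # Simulated binary responses of 500 respondents to 10 items, plus a
    # binary grouping variable (e.g., reference vs. focal group).
    set.seed(42)
    items <- as.data.frame(matrix(rbinom(500 * 10, 1, 0.6), nrow = 500))
    group <- rbinom(500, 1, 0.5)

    # DIF detection based on non-linear regression models from difNLR.
    dif_fit <- difNLR(Data = items, group = group, focal.name = 1, model = "2PL")
    dif_fit

    # Launch the interactive ShinyItemAnalysis app for point-and-click analyses.
    startShinyItemAnalysis()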


Limited Spots!

Don’t miss the opportunity to participate in these exclusive workshops. Places are limited, so be sure to reserve your spot in advance.