The ATP ECPS Summit will be broadcast live on December 8th and 9th, 2021.
11:00 AM - 12:00 PM EST | Opening Keynote: Strengthening Instruction and Assessment using Education Technology
As more schools resume in-person instruction, educators continue to use education technology to support student achievement. In this address, Rob Waldron, CEO of Curriculum Associates, will discuss how high-quality technologies can make it more practical for teachers to reconnect with their students and reshape the way they deliver instruction. He will also highlight how psychometric work is vital to developing and supporting curriculum and assessment tools in order to address long-standing educational disparities.
Speaker: Rob Waldron, CEO, Curriculum Associates
12:15 PM - 1:00 PM EST | Developing Equitable and Fair Learning Products: A Discussion of Ethical AI in EdTech
AI, machine learning, and automation are transforming the way we learn. These new technologies hold tremendous promise for democratizing education by providing learners the tools to gain new skills anytime, anywhere. While AI promises to transcend human limitations and increase productivity, it also poses the danger of perpetuating societal biases and amplifying them on a grand scale. How should EdTech providers develop learning products that are ethical and fair? How do we ensure that our AI creations reflect human values? In this session, leaders in AI, ethics, and education equity will review the current state of AI in education, share ethical frameworks for EdTech development, and discuss challenges in the field.
Speakers: Ada Woo (Moderator), Vice President, Innovative Learning Sciences, Ascend Learning; Daniel Clay, Dean, College of Education, University of Iowa; Ken Johnston, Principal Data Science and Data Engineering Manager, Microsoft; Yohan Lee, Chief Strategy Officer, Riiid Labs; Steve Dept, Founding Partner and Strategic Partnerships Director, cApStAn
1:30 PM - 2:30 PM EST | EdTech Demonstrations
An EdTech Demonstration is a presentation in which a company showcases its latest technologies, products, services, or solutions for the testing industry. All EdTech Demos will run live during this one-hour session.
2:45 PM - 3:30 PM EST | Learning in the Digital Age: EdTech Empowered Remote Instruction
The COVID-19 pandemic and the need for social distancing have disrupted learning on a massive scale. Learners ranging from pre-K students to professionals experienced learning challenges to varying degrees. Technologies such as simulation, game-based learning, and data-driven differentiated instruction are empowering teachers in the remote learning environment. How are these education innovations filling the gap? How will they continue to support learners' achievement after we return to the classroom? In this session, experts in K-12 and workforce education will discuss the unique needs in these learning sectors. They will also share solutions and lessons learned during the pandemic.
Presenters: Amy Riker (Moderator), Regional Vice President, Government Relations, Curriculum Associates; Jenn McNamara, Vice President, Serious Game, Breakaway; Kristen Huff, Vice President, Assessment and Research, Curriculum Associates; Christine Erickson, Deputy General Counsel, Ascend Learning
3:45 PM - 4:45 PM EST | Networking Event with Duolingo
Learn more about the Duolingo English Test mission & research.
Speakers: Jill Burstein & Geoff LaFlair
11:00 AM - 11:45 AM EST | Keynote: Games, Learning, and Assessments: Driving Engagement and Results
Educational assessment experts have long been experimenting with using game-based and simulation-based tasks in student assessment, yet few operational assessments leverage the distinctive advantages of games and simulations in setting up measurement opportunities that target hard-to-measure skills and constructs. Rebecca will discuss her work with the Imbellus team developing cognitive skills assessments and will share her team's next chapter of assessment and education work at Roblox.
Speaker: Rebecca Kantar, Head of Education, Roblox
12:00 PM - 1:00 PM EST | Invited Sessions
We are excited to announce that this invited session will be moderated by Ardeshir Geranpayeh, Cambridge Assessment English. These sessions will consist of a 15-minute technical presentation by each presenter, followed by a 10-minute group Q&A.
Invited Session 1: Generalization Challenges in Measurement of Collaborative Knowledge Building
Transactivity is a common operationalization of effective collaborative knowledge building and a valued collaborative process. In past work it has been associated with elevated learning gains, collaborative product quality, and knowledge transfer within teams. Dynamic forms of collaboration support have made use of real time monitoring of transactivity, and automation of its analysis has been affirmed as valuable to the field. Early models were able to achieve high reliability within restricted domains. More recent approaches have achieved a level of generality across learning domains. In recent work, we have investigated generalizability of models developed for a general adult population to a more highly skilled adult population. Population differences prompted both a reformulation of the operationalization and innovation in its automated measurement. Reflecting on this endeavor provides insights into opportunities and challenges in automated measurement of collaborative knowledge building across populations.
Speaker: Carolyn Penstein Rose, Professor of Language Technologies and Human-Computer Interaction, School of Computer Science, Carnegie Mellon University
Invited Session 2: The Crossroads of AI Fairness, Accountability, and Transparency
Modern machine learning techniques contribute to the massive automation of data-driven decision making and decision support. Examples from different industries, healthcare, education, and government illustrate the challenges of developing and making use of trustworthy and human-centered AI. It is increasingly understood and accepted that deployed predictive models may need to be audited. Regardless of whether we deal with so-called black-box models (e.g., deep learning) or more interpretable models (e.g., decision trees), answering even basic questions like “why is this model giving this answer?” and “how do particular features affect the model output?” is nontrivial. In reality, auditors need tools not just to explain the decision logic of an algorithm, but also to uncover and characterize undesired or unlawful biases in predictive model performance. In this talk, I will give a brief overview of the different facets of comprehensibility of (non-discrimination in) predictive analytics and reflect on the current state of the art and the further research needed to gain a deeper understanding of what it means for predictive analytics to be truly transparent, fair, and accountable. I will also reflect on the need to study the utility of methods for interpretable predictive analytics.
Speaker: Mykola Pechenizkiy, Professor of Data Mining, Department of Mathematics and Computer Science, TU Eindhoven
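To make the auditing idea above concrete, here is a minimal sketch (illustrative code, not material from the talk) that computes per-group positive-prediction rates for a binary classifier; the gap between the highest and lowest rate is one simple disparity measure (the demographic parity difference) an auditor might check first.

```python
from collections import defaultdict

def audit_selection_rates(predictions, groups):
    """Compute per-group positive-prediction ('selection') rates.

    predictions: iterable of 0/1 model outputs.
    groups: iterable of group labels, aligned with predictions.
    Returns (rates_by_group, max_gap_between_groups).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy audit: group "a" is selected three times as often as group "b".
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates, gap = audit_selection_rates(preds, groups)
# rates == {"a": 0.75, "b": 0.25}, gap == 0.5
```

A large gap is only a red flag, not proof of unlawful bias; a real audit would also compare error rates (false positives/negatives) across groups and examine the model's decision logic.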
Invited Session 3: A Short Introduction to Network Psychometrics
Network psychometrics comprises a novel set of psychometric techniques, in which constructs are characterized as networks of attributes. For example, clinical constructs like depression can be thought of as sets of symptoms that sustain each other; attitudes can be viewed as systems of mutually supportive feelings, behaviors, and cognitions; abilities can be seen as sets of interlocking skills. In network psychometrics, such components of a construct are represented as nodes, while relations between components of a construct are represented as links that connect these nodes. Given appropriate assumptions, recently developed techniques allow such networks to be estimated from data, which provides new ways to visualize, conceptualize, and analyze psychometric questions. In this talk, I will give an overview of the basic idea behind network psychometrics, offer a sketch of new possibilities that arise within network theory, and discuss how the approach relates to standard psychometric theory.
Speaker: Denny Borsboom, Professor of Psychology, University of Amsterdam
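As a rough illustration of the estimation step described above, the sketch below (an assumption on my part, not code from the talk) derives a partial-correlation network from raw data via the inverse covariance matrix; production network psychometrics typically uses regularized estimators such as the graphical lasso instead.

```python
import numpy as np

def partial_correlation_network(data, threshold=0.1):
    """Estimate a simple partial-correlation network.

    data: (n_observations, n_variables) array, e.g. symptom scores.
    Returns a symmetric adjacency matrix whose (i, j) entry is the
    partial correlation between variables i and j, zeroed out below
    `threshold` so only stronger links remain as network edges.
    """
    cov = np.cov(data, rowvar=False)
    precision = np.linalg.pinv(cov)           # inverse covariance matrix
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)       # standardize and flip sign
    np.fill_diagonal(pcorr, 0.0)              # no self-loops
    pcorr[np.abs(pcorr) < threshold] = 0.0    # sparsify weak edges
    return pcorr

# Toy example: three "symptoms" where the first two sustain each other.
rng = np.random.default_rng(0)
s1 = rng.normal(size=500)
s2 = s1 + 0.5 * rng.normal(size=500)
s3 = rng.normal(size=500)
network = partial_correlation_network(np.column_stack([s1, s2, s3]))
```

In the resulting matrix, the first two variables are connected by a strong edge while the independent third variable stays (near) isolated, matching the "nodes and links" picture in the abstract.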
1:15 PM - 2:00 PM EST | Peas in a Pod Discussions
Peas in a Pod Discussions are informal, face-to-face conversations with fellow conference goers who share common interests. These sessions are all about direct engagement and exploration of ideas.
Ascend Learning | Implementing Innovation When It May Not Feel Quite Ready for Production
Have you been hearing demands from clients and business partners for a new adaptive learning system? Have your business partners or clients asked you about implementing an AI solution for remediation? Often, organizations are caught between implementing new technology and finding a psychometrically sound option that will meet their organization's IT requirements. Join your hosts as they facilitate a conversation about the considerations psychometrics and IT architecture groups take into account to meet business and client needs once an agreed-upon path for innovation has been approved.
Speaker: Christine Mills, Director of Research and Applied Psychometrics, Ascend Learning
Duolingo English Test | Item Parameter Estimation in a CAT Setting
Principal Duolingo English Test Assessment Scientist Dr. J.R. Lockwood will lead a conversation about our approach to item parameter estimation in a computer adaptive test (CAT) setting.
Speaker: Dr. J.R. Lockwood, Principal Assessment Scientist at Duolingo
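The session abstract does not specify the estimation approach, so as general background, the sketch below implements the standard two-parameter logistic (2PL) IRT model and the item-information function a CAT engine commonly uses to select the next item; the parameter values and pool are purely illustrative.

```python
import math

def prob_correct_2pl(theta, a, b):
    """2PL IRT model: probability that a test taker with ability
    theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information_2pl(theta, a, b):
    """Fisher information the item provides at ability theta;
    a CAT typically administers the most informative item."""
    p = prob_correct_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Toy item pool: (discrimination, difficulty) pairs.
pool = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.2)]
theta_hat = 0.1  # current ability estimate for this test taker
next_item = max(pool, key=lambda ab: item_information_2pl(theta_hat, *ab))
# The highly discriminating item near theta_hat wins: (1.5, 0.0)
```

The estimation challenge the session addresses sits upstream of this selection step: in an adaptive setting, responses to each item come from a restricted ability range, which complicates fitting a and b for new items.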
Making Multi-Mode Delivery Work Worldwide
Test sponsors are evaluating multi-mode delivery due to the impact of COVID-19. Last year, programs introduced test takers to remote proctoring while still offering candidates the option to test in person (where permitted). The logistics of multi-mode delivery can become complex for the test sponsor and confusing for the candidate. During this Peas in a Pod Discussion, we will explore the following ideas:
1) Creating guidelines, handbooks, and other materials to manage candidates' expectations
2) Researching what considerations to make when creating test content and selecting test vendors
3) Reviewing technology/software requirements that can assist (or block) the success of a multi-mode delivery process.
Speakers: Brodie Wise, Internet Testing Systems & Jeffrey Spranza, Internet Testing Systems
Advancing Equity in Assessment: Opportunities and Challenges in the Digital Era
Equity and fairness have come to the forefront of concerns in high stakes assessment, particularly where technology, including AI, is used to make predictions or decisions about people. Questions are being raised about the fairness of tests and potential bias against various demographic subgroups, such as race/ethnicity, gender, culture, native language, and national origin, as well as individuals with disabilities. In this session, experts in testing and technology will discuss fundamental concepts of equity, test fairness and bias, and their roles in addressing challenges in Europe to develop solutions for assuring equity in the use of assessments.
Questions to be discussed include:
- What do we mean by equity in testing? How are concepts such as fairness and bias related?
- What is bias in testing? How is it measured? Can it be controlled?
- What solutions are organizations adopting to address equity and diversity concerns?
- What should the industry do, particularly in Europe, to advance equity in assessment?
Speakers: John Weiner, PSI Services; Andre Allen, Fifth Theory; John Kleeman, Questionmark
2:15 PM - 2:45 PM EST | To Be or Not to Be...Relevant in Assessment: Computational Psychometrics for a New Generation of Assessments
The term ‘computational psychometrics’ was introduced in 2015 to describe a new approach to measurement that includes machine learning algorithms to take advantage of the rich and diverse data types generated from innovative digital assessments. The first published work using computational psychometrics examined process data, which is associated with the way test takers respond to questions (such as navigational patterns and response time).
Since its initial implementation, computational psychometrics has defined the way in which test development work has evolved alongside technology. For instance, this includes the development of AI-based automatic item generation and automatic scoring and their integration into a psychometric framework; the management of database alignment, multimodal data, ancillary data, and multiple data sources; the development of interactive and collaborative assessments; and, last but not least, the transition to embedding assessments into personalized learning systems. In this presentation, the speaker will give an overview of computational psychometrics and how it contributes to assessment innovation.
Speaker: Alina von Davier, Chief of Assessment at Duolingo
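To make the notion of process data concrete, here is a minimal sketch (illustrative only, not Duolingo's implementation) that derives per-item response times, one of the process-data features mentioned above, from an ordered event log:

```python
def response_times(events):
    """Derive per-item response times from an ordered event log.

    events: list of (timestamp_s, item_id, action) tuples, assuming
    each item emits a 'shown' event before its 'answered' event.
    Returns {item_id: seconds from shown to answered}.
    """
    shown = {}
    times = {}
    for ts, item, action in events:
        if action == "shown":
            shown[item] = ts
        elif action == "answered" and item in shown:
            times[item] = ts - shown[item]
    return times

# Toy clickstream for a two-item test.
log = [(0.0, "q1", "shown"), (12.5, "q1", "answered"),
       (13.0, "q2", "shown"), (41.0, "q2", "answered")]
# response_times(log) == {"q1": 12.5, "q2": 28.0}
```

Features like these, along with navigational patterns, are what computational psychometric models combine with item responses to go beyond right/wrong scoring.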
3:00 PM - 4:00 PM EST | Closing Panel: Innovation versus Regulation for AI in Education: A Balancing Act?
The omnipresence of AI has been described as the 21st century’s “electricity.” Such a lofty characterization points to the expansive reach of AI in society. For educational policy and decision makers, AI applications present both a challenge and an opportunity: AI educational solutions can help address the unevenness in learners’ experiences, but they also present new challenges for fairness, privacy, and equity. A key challenge for policy makers is to build a regulatory framework that encourages both innovation and responsible use of AI educational services and products. In this session, policy analysts will discuss the outlook for AI as a transformative force in education and the way for AI to advance in today’s increasingly regulated environment.
Speakers: William G. Harris, CEO, ATP; Andreas Schleicher, Director for the Directorate of Education and Skills, OECD; Ansgar Koene, Global AI Ethics and Regulatory Leader, Ernst & Young; Alina von Davier (Moderator), Chief of Assessment at Duolingo; Deborah Quazzo, Managing Partner, GSV Ventures and Co-Founder, ASU GSV Summit
* Please note: Schedule is tentative. Check back regularly for any updates.