The adoption of educational technology has grown significantly over the past decade, and primary and secondary schools are now clearly comfortable with, and embracing, new technological norms. The next step for school leaders is to focus on strategic purchasing of educational technology, ensuring that these tools make a genuine, positive difference to teaching and learning.
But effectively evaluating edtech products is no easy feat. Districts must balance diverse needs, ensure data privacy, and align technology initiatives with educational goals. The process involves navigating budget constraints, integrating new tools with existing systems, and ensuring accessibility for all students. To shed light on how districts are addressing these challenges, EdSurge spoke with three leaders in edtech.
Susan Uram, director of educational technology at Rockford Public Schools in Illinois, leverages her experience as a classroom teacher, curriculum dean, and instructional coach to bridge the gap between IT initiatives and classroom instruction. April Chamberlain, technology and library supervisor at Trussville City Schools in Alabama, also began her career in the classroom before taking on a pivotal role in aligning technology initiatives with educational needs. Jessica Peters, director of personalized learning at KIPP DC Public Schools, oversees the integration of educational technology across 22 schools, leveraging her experience as a classroom teacher and educational technology trainer to implement effective edtech solutions.
Together, they provide valuable insight into the challenges and strategies surrounding the acquisition and implementation of educational technology in their districts, including a shared enthusiasm for their participation in the Benchmark project. Benchmark, an ISTE research project with funding from the Walton Family Foundation and the Chan Zuckerberg Initiative, aims to support districts trying to improve the ways they assess, measure, and report on student progress based on their needs and contexts. As part of the Benchmark project, ISTE worked with six public school districts across the United States to explore issues of practice related to assessment and assessment selection within their districts.
EdSurge: How does your district approach the evaluation and selection of educational technology products? And what makes the procurement process challenging?
Uram: Rockford Public Schools is a relatively large district with 27,000 students. We strive to balance the different needs of each school with a mobility rate of nearly 20 percent within the district. As such, we try to respect the professional decisions of our educators while providing consistent education and experiences for families across the district.
When we receive a new edtech product request, we have checkpoints to assess whether the tool meets our needs. Does it duplicate something that already exists? How is this tool different or better? Would a pilot provide a genuine test? (Product evaluation) is not just about whether teachers or students like the tool. It needs to be a product worth the time and effort required to learn to use it effectively.
Chamberlain: We ask those same kinds of questions. Our state has a multi-year program that helps us assess our current resources to decide if we need to recalibrate, eliminate, or add something new. We use a multi-tiered system of supports (MTSS), so it is important, but challenging, to have all stakeholders represented at the table when reviewing educational technology.
Over the past school year, we audited the district’s programs, initiatives, and projects. We had representatives from technology, student services, administration, counseling, and curriculum in the room for the district meeting. Then, the principals went around and conducted similar audits at the building level. First, we listed all of the edtech products that teachers use, both instructional and operational, which revealed some surprises. Then, we categorized these resources by subject areas like English, math, behavioral wellness, or core, and broke them down further into the environment each product serves: Tier 1, 2, or 3. This allowed us to see gaps and overlaps with edtech products.
Moving forward, we have a form that teachers must complete to request a new product. The teacher answers questions about the tool, such as technical details and how it aligns with instruction or improvement. That completed form is sent to the school’s technical team, who analyzes the product and compares it to what we know is already being used at the school and district level. Once approved at the school level, we move on to pilot testing to determine if there is sustained value for other settings in the school or district to implement the new product.
Peters: KIPP DC has a few checkpoints in place. Halfway through the school year, around January or February when budget planning begins, I do a cursory analysis of all of our current products to identify those that are underutilized, ineffective, or redundant. Our pilot program is generally very open to requests, though we say no to some things if they are extremely duplicative. Each summer, we conduct a thorough effectiveness analysis of all core and pilot products. Occasionally, some products slip through our data review due to KIPP Foundation initiatives or strong endorsements from top educational leaders, and we have to adapt accordingly.
How can the Teacher Ready Framework and Assessment Tool support educators and district leaders in evaluating and selecting educational technology products?
Peters: The tool is much more comprehensive than anything we've used so far and addresses almost every question we could think of. If we were to analyze the tool for each product, I think there would be a lot more confidence that the product is, in fact, appropriate for us to use and meets all of our standards. It's a heavy tool, so working through the entire framework is time-consuming and really not something you could ask your average teacher or school principal to do. But I think it's great for district-level assessment.
Uram: As soon as the COVID pandemic began, we were overwhelmed by the thousands of products that educators were using. We needed better language, a framework to address all the products. The tool helped cut through all the fluff that a vendor might say about the product and ask questions like, “What are the accessibility features? Where are they located? Is there interoperability?” It makes the assessment more fact-based and removes feelings and opinions.
The tool contains a lot of questions, so we've grouped parts of the framework together and provided guiding questions based on those parts. If a product passes those questions, we can dig a little deeper. (The tool) has helped us take a deep breath when we see a shiny new product before we buy it.
Chamberlain: We learned to change the questions we ask vendors from “Does this product do this?” to “Show me how this product does this.” The tool guides us to ask the right questions and think about what we’re trying to accomplish with a product, so that we’re not saying “I want this math product,” but rather “I want a better way to assess my third graders on the skills that the data shows they were underperforming on.” It’s very empowering.
Uram: We need to think about the role of technology in school and how we evaluate whether a product enhances teaching and learning. We are at an important crossroads in understanding data privacy and online presence in a way that was not necessary before. It was different when kids were just playing Oregon Trail. Now there are more risks. We ourselves have been attacked by ransomware. So making data privacy part of the discussion about product evaluation is a necessity.
Peters: The Teacher Ready Framework takes the emotions out of the conversation and bases it on data. One big success we’ve seen at KIPP DC is that we no longer base (product purchasing) decisions on how cool something looks. We now do effectiveness analysis. The tool really shows us what works and what’s worth spending time on in the classroom. It’s created a huge shift in the standards we apply to products.