MARK SHINN: Thank you. Now, let's see -- let's see if I can live up to at least a little bit of that, so good morning to you too. So I -- well, we will do a little bit of choral responding a little
bit later. I don't have an opportunity here in just a couple of hours to get to know who you are and what your backgrounds are, and what you want to get out of this session, but it will be really
important to me that what I do meets your needs and not mine. I have, you know, for two hours about a hundred and ten slides, which is of course totally unreasonable. You should usually shoot for about 30 in that timeframe, and that's largely because I don't know who the audience is, okay? I was asked to do a couple of hours this morning on progress monitoring at the elementary level -- basic skills progress monitoring -- and then a couple of hours this afternoon on secondary basic skills progress monitoring, which is at least a little bit of an oxymoron in some sense, basic skills at the secondary level, but still the needs of some kids are going to be clear and apparent, and I worry that we'll stop doing some of the right things. So let's make this interactive. Let me at least cover some of the ground rules. If you have a question, raise your hand and say, "Hey, Mark." So let's practice. Ready.
AUDIENCE MEMBER: Hey, Mark.
MARK SHINN: Okay. And we can go wherever you want, okay? What I've tried to do is divide this thing into like three or four parts, okay? One, a little bit of basic background; two, some notions about how we can use simple progress monitoring data represented by curriculum-based measurement -- which you might also be familiar with in Pennsylvania, sometimes called CBA, not
necessarily I would argue accurately represented maybe by a bunch of other commercial products including DIBELS sort of, AIMSweb sort of, easyCBM kind of. So I'll hit -- I hope a bunch of those
different kinds of things. I want to address not only how we can use this in a variety of different ways. I want to also address some of the common myths out there that get in the way of something
simple, scientific, and sound in terms of progress monitoring. Most commonly: "I know this kid who reads but doesn't understand" -- which, well, duh, would characterize all of us; we can all read some things and not understand them. You know, it's the coming tax season and I'm trying to read the things my accountant is sending me, and I have absolutely no clue what he's saying even though I can read the words. I want to be able to address that and put it into the context of these battles we're dealing with. I want to deal with the problem of the F word,
because I want you to stop calling these things fluency measures because they are indeed not, and if you think of them as fluency, we will diminish our impact on making good decisions about kids with
data. I want to run through maybe how we make decisions in multi-tiers with respect to goal setting, okay? That is, how do we -- what do we think about with respect to kids at Tier I for third
graders in typical growth and development. I might skip over that section, okay? Depending upon who you are. How do you set goals for kids at Tier II? Well, reducing the gap. How do you set goals at
Tier III? Reducing the gap. But I want to ensure that we have the opportunity to fix the most solvable problem we have in Special Education, which is terrible goals and worse progress monitoring, okay? So that's kind of like that last quarter, so I may skip over the third quarter and maybe cycle back, okay? So if we could do that, you know, we'll be catching up to 1978. And you think I'm joking. It's a thirty-five-year gap between what we know and what we do. And I still see professionally embarrassing IEP goals and I see absolutely dreadful Special Ed progress monitoring even now, a decade after RtI really put even more emphasis on progress monitoring, and it seems like we've left Special Ed behind. Okay. So we covered sort of the ground rules. If we can make this interactive, that would be great. I'm used to moving around, but since they're broadcasting this [inaudible], I'll try to stay in one reasonable place. So
here we are. Pennsylvania Special Ed Conference, I might be reasonably close. I know we're in Hershey because there's chocolate absolutely everywhere. So -- yeah. And it is February, right? Okay. So
if we take our first shot -- I've had an opportunity to work a little bit in Pennsylvania off and on for about 20 years. Some faces sort of look familiar. I like to come back here because you've probably seen this one before, and you folks tend to be the movers and shakers. So -- and that of course means a lot here in Pennsylvania. If there are three or four states where I can go and have a pretty sophisticated, intense dialogue about multi-tiered services, RtI, RtII, ISTs, da, da, da, da, all the history, CBA, CBM [makes noise], it would be Pennsylvania, okay? I can have some pretty good discussions in my home state of Illinois once in a while -- though I have argued that they've sort of frozen in the '90s -- and actually, interestingly enough, Alaska, especially Anchorage. There are some pretty good discussions we can have. I was just there about 10 days ago. It was actually warmer in Anchorage than Chicago by about 40 degrees. So --
but here I'm expecting a lot out of you folks, okay? We've got some good training programs. We've always had a very good state relationship, and you've had tremendous innovation for more than 30 years in terms of doing the right thing, especially for kids that have such significant discrepancies that they require intensive interventions -- Special Ed, okay? So that's a high standard for you folks, okay? So please live up to it. I've worked on progress monitoring since about 1978, with an opportunity to be involved in research and practice and trying to provide a reasonable, simple system for making decisions about: is this kid discrepant, and is this kid progressing in the areas of basic skills? Okay. It's sort of what I do for a living, but I work with
schools, of course, and we don't like charts that don't show good results even if the kids are not making good results, okay? So this cartoon -- I've had it probably 30 years: throw out the old chart and get a new one with the wiggly line going diagonally in the other direction, okay? We all want data that make kids look good whether they are improving or not. I worked with a school system and we were talking about progress monitoring, and one of the members of the assessment committee said, "I want kids to show growth whether they're growing or not." It's like, "Well, that's about you and not about the kid." You know, this [inaudible] -- I understand how difficult it is to produce results, okay? It's difficult to produce results. Everybody knows that. But until we have an honest data system, it's even more difficult, because then we can do what we want rather than what kids necessarily need. Now, I see these kinds of things everywhere in terms of everybody wanting good data. This is one you might have seen -- I'll just move on here a little bit. Just like, "I want to hear that sales are up" -- there you go. And we get to be dancing at the end if you do this, all right? Let's see. Okay. So my area of expertise is simple measures of basic skills, okay? Captured by the phrase curriculum-based measurement since probably the late
1980s. And it has very little to do with your own curriculum. It's sort of a misnomer to keep using the "CB" in it, but that's mostly historical tradition, and if you sit down to it -- how would you, how could you, what would you name it instead, okay? So, all variations of different themes. Almost all the people I know that worked in this field are inbred. You know, we know each other, we hired each other, we chaired each other's committees. We all started at the University of Minnesota one way or another -- yeah. So I'm hoping -- I don't like taking credit by myself -- that there will be a third edition of this, maybe in 18 months; [inaudible] has tentatively agreed to do an update on the third decade of what's going on with basic skills progress monitoring and screening, so we'll see. That's my goal. I was one of the six members of the U.S. Department of Education's Office of Special Education Programs technical review panel, which meant that I reviewed things to see whether they met progress monitoring standards -- they do write about it, you know. So I do research with real kids, real teachers, real staff, et cetera. So I hope I don't look too much like a university professor, because I do real things. Part of what else I have to do for you: give me 30 seconds for what I call full disclosure. I believe
anybody doing a presentation who has a commercial interest should be as transparent as possible. I don't begrudge people making money, but you should know that some people have commercial interests, so give me 30 seconds. I serve as a paid consultant for Pearson Assessment. They bought my friend Gary Germann's company in 2006. They keep me on as a consultant, okay? And they ask me questions every once in a while. I try to help them with what I think is direct advice reflecting what I know from research and practice. I do serve as a consultant for Cambium; you should know that I have no intervention program. I do get a royalty check from McGraw-Hill for their Jamestown Reading Navigator. They asked me to help them with their progress monitoring system. That should be a theme that you should note. It covers dinner, maybe, with my wife and me when we go out, so it's not a super amount of money. And then I serve as a member of the national advisory board for the Consortium on Reading Excellence, housed in California, with projects in 43 states, helping schools do staff development and move the dial for literacy and math. They are really good. Their conference, which they hold every year -- this year, mostly due to the really terrible economic times around the country, in Temecula, California -- is the best single conference you could go to for evidence-based practices. They don't pay me to say that. They pay me for my advice on progress monitoring, okay? I'm done with that. I put together a bunch of things for you to read, okay? I
don't like producing paper. It seems sort of silly, and most of you would throw it away. They gave me a folder that was really nice, but I'll leave it here when I leave, okay? So it's really easy to access -- other than the National Security Agency, which is probably tracking whether you go to my website or not; that's a joke, I don't know what they do. You can click on that blue icon down there, and there's the address. I hate having my name in it, but it's easier for people to find me -- my email and the website are pretty similar. You go -- click on that blue icon, that's the downloads-for-professionals icon -- boom, right there. That will take you to a list of alphabetical folders. Go down to the Ps, that would be presentations and handouts. Click on that. Go down to the PaTTAN Pennsylvania Conference 2014, okay? Got it? It's really simple. Alphabetized, okay? I ask one thing: personal use only, okay? So don't copy all the things and put them in a binder and sell them like I discovered somebody was doing; that doesn't seem right. But there will be sets of articles and book chapters, and a copy of the presentation -- the PowerPoint, if you want that. I don't copyright my stuff. If there are some things in the presentation you'd like to share in your local district, I don't really care; just don't change the content significantly so that it makes me say something I wasn't trying to say. There's also a QuickTime movie version of the whole presentation, which will include all the embedded video, including lame jokes. And that's there -- you can watch it on your phone, your iPad, any computer that has QuickTime. It is not voiced over, okay? But my guess is that's what this thing might actually end up doing. Plenty
of things to read, including a couple of key things that I've identified for you, okay? This one, which is a chapter -- I was trying to find the longest book chapter title that could possibly be compiled, and I think I did it: building a scientifically based data system for progress monitoring and universal screening [inaudible], yeah, yeah, yeah, that's really long. So I sort of succeeded: a long title in a really long book. So this is there for your personal use, and the publisher hasn't been very grouchy about me letting people read one chapter from what I think is a really great book. If you don't have this book -- I think I get a dollar and a half; I split it with Hill Walker, a really famous Special Ed guy who's retired from the University of Oregon. About every 10 years we put together what we think is the best work of the best people doing this stuff, from pre-K to high school, on evidence-based interventions. And all the authors wrote for free, you know, and they still charge too much. Don't quote me -- oh, I guess I'm on video. Books are really expensive these days and I wish they
were not so expensive. Okay. So, overview -- keeping with my theme of monkeys and charts going diagonally in the wrong direction. I'm going to guess many of you have done something like this: "When I say begin, start reading aloud from the top of the page. Read across the page. Try to read each word; [inaudible] if you don't know, I'll tell you. Be sure to do your best." Okay? You know the standardized administration and scoring of this very simple and elegant reading measure. Now, there are other measures -- math, and spelling, which I love in particular, writing, early literacy, and early numeracy. There are lots of curriculum-based measures, and I'm going to use reading as my primary example, but the concepts still apply to the other areas. I'm presuming you're at least a little familiar with this; if not, I'll catch you up to speed pretty quickly. And I'm going to take a risk here: I'm going to try to collect a little data from you. So if you have your cell phone and you know how to text, or you're hooked up to the internet and you want to do some things -- I'm going to use a piece of software here called Poll Everywhere. Anybody done Poll Everywhere before? Okay.
So you're kind of familiar with it; if not, it's really cool. It's free if it's under 45 people. I have a subscription, so I can do as many people as I want once a month, but I kind of like to show you -- let's go here. And I've got a couple of questions for you, to kind of give me a sense as to where you might be. Presto. Okay. Let me shrink this a little bit. Okay. My first question -- let me shrink it some more -- is this; follow the instructions, okay? If you're hooked up to the internet, you can just simply go to pollev.com/Markshinn and stay there, because I'm going to ask three or four questions, okay? Three in a row here to begin with. If not, if you know how to text, there's the number that you would text to and there's the response you would code in. Do you think your RtII implementation in your district at the elementary level is deeply ingrained -- teachers are implementing, administrators are supportive? Now, if you don't think that you're that deep yet, you could say -- this is a Minnesota term from where I grew up -- well, it's more than just putzing around. We've got some kind of good things in place, okay? So that's going to be sort of your "not quite where you want to be, but we're getting there." The third choice is putzing: we get a little bit of this, a little bit of that -- you know, better than nothing. Or what about this one? Widespread passivity, if not resistance. Okay. So you've got the numbers and I should be able to do my little live chart. Okay. Can you still see the numbers? Excuse me for walking in front of the screen. Okay. You guys are very good at this, as I would expect from people in Pennsylvania -- experienced users. So that gives me kind of a baseline for my second question. It's pretty easy to do, and it gives me a sense as to where you folks might be at. Putzing -- that's kind of interesting, because coming to Pennsylvania I would expect most of you to be more than putzing. What are the barriers? We'll have a chance to maybe drill down on some of this stuff. Huh. Okay. Let's go to the next poll. Excuse me for walking in front of the screen. I'll just stand over here
this time. Okay. We'll come back to these numbers in a sec. Now, I want you to just think about reading, and I want you to think about your elementary-level implementation of RtII, and I'd like you to tell me about your progress monitoring at the elementary level for reading. Is it: everybody does their own thing? General education does this, Title I does that, Tier II -- if we have it [makes noise] -- everybody does their own thing, Special Ed does its own thing; everybody is what we call cowboys, okay? Or: well, General Ed does their thing, but everybody does it the same in General Ed. Tier II -- it's different than what they do in General Ed, but in Tier II everybody is consistent, okay? Tier III -- everybody is consistent, but it's different than Tier II, okay. And Special Ed -- everybody does it similarly, but it's different than the others, okay. So, similar within a program or a tier, but different tier to tier. And then the bottom line is, "Hey, look, we measure like a hospital does." It doesn't matter whether you come into the emergency room or you just have a regular health checkup, they take your temperature the same exact way. We don't use Kelvin over here, Fahrenheit over there, Celsius over here and there; we're pretty consistent.
AUDIENCE MEMBER: So, the poll -- do you want us to refresh for your next question...
MARK SHINN: Nope. This is -- this is about whether your progress monitoring is seamless. Okay, yes -- if you're on the web, you've got to go to the next question. Yeah, I've never been on the receiving end of the polls; the delivering end I've only done from this one. So, here we are in 2014. If a hospital can use a basic weight scale, a basic mass scale, a basic rate scale, why is it so difficult for us to sit down and say, "Kumbaya, let's do something similar"? So we don't have to do what I call "start over," okay? We have perfectly good data from Tier I and we decide the kid needs Tier III -- well, wait a minute, we get to start over again with a new data set? That seems sort of bizarre, doesn't it? Doesn't it seem incredibly inefficient? If you've got a good data system that measures something authentically, accurately, validly, why would we keep starting over again? This is the picture of practice in way too many places, and
this is way too rare. Okay. Now, let's go down a little bit farther. Now we're going to hit close to home: what do your IEP goals look like? Okay. Now we get the kids, or determine they're eligible for Special Ed. Do you see lots of IEP goals that end in "80%," and teachers get to figure out how to monitor progress for those goals? Or, well, we have lots of IEP goals ending in 80% [makes noise], but at least everybody has to monitor progress the same way. Or: we write fewer IEP goals, and we might use curriculum-based measurement to say that in one year the student will read 85 words correct per minute with three or fewer errors from grade 3 material. And we standardize those progress monitoring practices by having the kid read one randomly sampled grade 3 passage each week, comparing the actual rate of progress to the expected rate of progress. And of course your last answer is "beats me." Because even now -- still, in 2013 or 2014 -- I get answers like, "I just work in a school. I don't know." It's like, "How can you not know?" That gets pretty close to the baseline data I've been looking for. Okay. So, where are our Special Ed directors in the room? What are you waiting for? Write a memo. You don't take a vote; you write a memo. In 1982, I worked in the Minneapolis schools, and my director of Special Ed said, "We're no longer going to do things the old way -- the way that doesn't work, the way that doesn't improve student outcomes, the way that needs a lot of paper, the way that people do not invest in how we write goals for kids. We will do it a new way." The University of Minnesota was funded for five years to develop a system for writing IEP goals and monitoring progress, because everybody knew this 80% stuff was utter nonsense. Okay. And so, from now on, every time a kid's IEP expires, you will rewrite a goal based on a present level of performance determined using a survey-level assessment process and curriculum-based measurement, and you will monitor progress once a week toward that goal -- and we're providing training and coaching to be able to do that.
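The goal-and-progress logic described here -- a baseline, an annual CBM goal like 85 words correct per minute, weekly probes, and a comparison of the actual rate of progress to the expected rate -- can be sketched in a few lines of code. This is an illustration only, not part of the presentation's materials; the baseline, goal, and weekly scores below are made-up numbers.

```python
# Hedged sketch of CBM goal monitoring: draw an "aim line" from the
# baseline score to the annual goal, then flag weeks where the kid's
# observed score falls below the expected score for that week.

def aim_line(baseline_wcpm, goal_wcpm, weeks_to_goal):
    """Expected words correct per minute for weeks 0..weeks_to_goal,
    assuming linear growth from baseline to the annual goal."""
    weekly_gain = (goal_wcpm - baseline_wcpm) / weeks_to_goal
    return [baseline_wcpm + weekly_gain * w for w in range(weeks_to_goal + 1)]

def behind_aim(scores, expected, tolerance=0):
    """Return the week numbers where the observed score is below the aim line."""
    return [w for w, (s, e) in enumerate(zip(scores, expected))
            if s < e - tolerance]

expected = aim_line(baseline_wcpm=40, goal_wcpm=85, weeks_to_goal=36)
observed = [40, 42, 41, 45, 44, 43]          # first six weekly probes (invented)
print(behind_aim(observed, expected))         # [2, 4, 5]
```

In practice a decision rule (for example, several consecutive data points below the aim line) would trigger an instructional change, which is the "is it working? keep going; is it not? change" idea mentioned later in the talk.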
Now, tell me how you implement RtII -- with the R being response -- if you do not have a ready system for measuring response. How do you do that? Boy, you're going to be making it up every case, every time, as you go along. And indeed, we have to have quality control, which to me is what Special Education is: quality control for kids who need intensive intervention. So it can't be anything the adults want. Remember, it's a legal set of protections -- so why in the world are we not leading? Okay. That's going to be my take today, got it? And let me show you some examples and illustrations. Now, that's pretty strong language for administrators. If you want a sample letter that you could use or modify to send out to your staff, I'll be glad to send you one, because it's basically a variation of the letter that Dr. Eve Comer from the Minneapolis schools wrote in 1982. Seventy-five Special Ed teachers across 40 elementary buildings -- actually it's 40 K-8 buildings. Part of my job as the school psychologist was to go out and coach and train people how to write goals and how to progress monitor. And interestingly enough, those progress monitoring practices have stuck through thick and thin, because they make good sense to teachers, they don't take a lot of time, and they are related to improved student achievement outcomes. Cool. Okay. So, our polls -- we'll come back to this data perhaps in a little bit. Let's talk about some of our content. Okay. So, this is an illustration of curriculum-based measurement. Billy read fourth grade material; he's a fourth grader.
[VIDEO PLAYING]
you do your best when reading. Are there any questions?
TEACHER: Okay. Begin.
BILLY: No.
BILLY: It's a pretty good competition. A hard feeling I felt probably relieved. It was a best one at new school. After I read a few terms I was important to start reading out -- it out loud. I
followed the [inaudible] directions I give. First, I read the competition...
MARK SHINN: Very bad.
BILLY: ...out. The word felt trying to stand important report read to...
MARK SHINN: It's not good.
TEACHER: Impressive.
BILLY: Impressive just to her [inaudible]
MARK SHINN: That's not good.
BILLY: [inaudible]
BILLY: [inaudible]
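A one-minute probe like the one in the video is scored by marking each attempted word as correct or as an error; the score is words read correctly in one minute. A minimal sketch -- my illustration, not the presenter's scoring materials; the marks below are invented to match Billy's totals from the discussion that follows:

```python
# Hypothetical scoring of a one-minute CBM oral reading probe.
# Each attempted word is marked True (read correctly) or False (error).

def score_probe(marks):
    """Return (words correct per minute, errors) for a one-minute probe."""
    correct = sum(marks)          # True counts as 1
    errors = len(marks) - correct
    return correct, errors

# Invented marks totaling Billy's observed performance: 38 correct, 17 errors.
marks = [True] * 38 + [False] * 17
print(score_probe(marks))   # (38, 17)
```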
MARK SHINN: Does this fourth grader read very well? Would he be the star pupil in your fourth grade classroom? Isn't he more like a student who has an IEP and is provided an intensive intervention? In some communities, this student actually might be the best reader -- I've worked in communities like that. In other places, this might be the student who is the lowest-performing reader in fourth grade and has an IEP. These are basic sorts of data that we've known for 35, 40 years: if you want to know how well a kid reads, listen to him under standard conditions, and it doesn't take a long time to get a sense of how well this kid can read. Billy's rate is 38 words correct per minute, with 17 errors. The kinds of errors he makes are what we call meaning distortions; in other words, they're not even close. They're not preserving meaning. He engages in very little self-correction. He's not reading with expression or prosody. There are a lot of things we can see in what's called a rich task, according to the Common Core State Standards. A lot of information is packed into this. If we did this for 10 minutes or we did this for one minute, we'd reach the same conclusion -- which is more efficient? That's a rhetorical question. If it takes you 10 minutes to draw a conclusion, or it takes you one minute to reach the same conclusion, which one would a teacher prefer? You'd prefer something that doesn't get in the way of teaching. So, one of the design characteristics in building this research-based system, beginning in 1978, was
something that was easy to do and time-efficient. So, big ideas. If we think about RTI, or RtII, or -- I'd prefer, actually -- Multi-Tiered System of Supports, mostly because RTI gets too associated with just SLD identification, specific learning disabilities: data-based decision making is the hallmark, and the two most important decisions we make are going to be screening, so we can do early intervention, and of course progress monitoring -- progress monitoring of appropriate frequency, aligned with severity of need. Okay. And here is what's really important for us to recognize: frequent progress monitoring, where we look at the data and say, "Is it working? Keep going. Is it not? Change," is one of the most powerful tools in our toolbox. We'll see a little bit of a meta-analysis done by John Hattie, who would sit down and say that this is the third most powerful thing that we know about what works with kids, and the single most powerful teaching variable that we can actually control and influence. Wow. For kids that are significantly discrepant in basic skills, with or without an IEP, frequent progress monitoring is one of the easiest problems to solve in education. It doesn't cost a lot of money. You don't have to go to graduate school for five years to learn how to do it. And we still are lagging behind. I'm going to argue that, one way or the other, Special Ed leads. If General Ed can't get their act together in what they do, if our tiered services can't get their act together in what they do, if we can't build our tiered services by having English-learner programs and Title I programs and reading specialists say "Kumbaya" and "we are Tier II providers" -- if a kid gets that IEP, we take care of our business. We don't wait for anybody else. Okay. Critical vocabulary: it really bothers me when people throw terms around very casually, without thinking. So, I hear schools all the time use terms like benchmarking, benchmark assessment, okay?
And I say, "What do you mean when you do that?" And they say, "Well, you know, benchmark assessment." Or, "Well, what is that?" "Well, you know, screening." And I say, "Hmm." Benchmark assessment is
repeated assessment, okay? Repeated assessment, usually, three to four times a year that has two purposes, not just one, universal screening and universal progress monitoring, okay? We're assessing
all the kids to see who might be discrepant. We're assessing all the kids to ensure that they're progressing, that's universal. It bothers me when I see benchmarking of basic skills at high school,
you'll hear about that this afternoon. It makes no sense to me benchmarking high school students for basic skills. You might screen, which is different. Benchmarking is screening and progress
monitoring. We need to keep that straight. Universal screening is pretty simple, that means we're screening everyone. If you're trying to implement RtII and you're based on a teacher referral model,
I'm going to suggest, you're not doing RtII. Teacher referral models are inherently biased. Teacher referral models are inherently inefficient. If you wait for teachers to refer kids even after you
benchmarked and screened, why in the world would you screen? When we have kids that have discrepancies in vision, we don't wait for the kids to be referred for vision screening, do we? No,
we use those data to make decisions. Screening, testing all the kids, it helps us measure this key concept in RtII called a performance discrepancy. Is the kid sufficiently different or significantly
different from other kids? Now, that's different than individual screening. Individual screening is one kid at a time. I've got a kid who might be struggling in ninth grade in Science, I might
individually screen that student to rule out a basic skills discrepancy -- but I'm not assessing all the kids; that is not cost-efficient in RtII at some point. Multiple-gating screening is something that we'll talk a little bit more about at the secondary level this afternoon. Or you might use your existing data as a first pass to screen all the kids, and follow up with the kids that are maybe discrepant with an individual screening. And then, frequent progress monitoring. I want to distinguish progress monitoring from frequent progress monitoring: frequent means measuring at least once a month,
okay? So, if I'm doing benchmark assessment which has universal screening and universal progress monitoring, that's not frequent progress monitoring, okay, because that's like three times a year,
it'll give you a rough idea if kids are growing or not. But it's not the level of frequency we may need if a kid is at risk, which I might monitor once a month or kids that are significantly
discrepant and/or in Special Education with IEPs -- I would monitor those kids weekly. Okay. So, that's our vocabulary. I'm going to try to be consistent myself; if you catch me being inconsistent, call me on it, because it's tough to do. Now, if you haven't seen John Hattie's book called Visible Learning -- anybody seen it? It's kind of a neat book, okay? And he's got a couple of nice videos that go along with it; you can find them on the internet. Hattie -- and he must either have a lot of money, a lot of time, or both -- has looked at over 800 meta-analyses and distilled what kid factors, what teacher factors, what school factors, what instructional factors impact kids the most. Really neat. And you don't have to read all of that, because we can go right to how he summarizes it. He puts it in a speedometer, which I really like, okay? What he has done is gone through all these 800-some meta-analyses and identified the ones that are not benign but actually bad for kids -- there are some things that we do that make kids worse. Guess where retention would fit. Retention is associated with a negative effect size. It's not neutral; it's almost always not so good. Don't think of the one kid it worked for -- oh, okay, got it. There's a bunch of effects in these 800-some meta-analyses that you could just attribute to kids getting older. I've got a five-year-old, and he runs a lot better at five than he did at four, and it's not because we put him in an intensive intervention.
He's just older, got it? Now, here's what Hattie points out, there's a bunch of things that fall into the effects of teaching. Hattie's perspective is this, when a teacher says, "I tried that, it
worked for me." Teacher's probably right, because just about anything you do works. Did you know hardly anything doesn't work? And think about it: how many of us have had not-so-good instruction and still learned something? Think of our college career; think of graduate school, even better, okay? So, the bar for Hattie is not "different from zero." What we're looking for are the effects that are much
above what we do as part of the teaching. And guess where frequent progress monitoring, formative evaluation, the kinds of things done associated with curriculum-based measurements, you see where
that stands? When you look at the 800 and some meta-analyses, overall rank, third, that's almost one standard deviation difference when teachers collect systematic data and use those data to adjust
instruction when it's not progressing. Now, that's not a trivial or unimportant effect size, that's a really big deal, and if something like this happened in medicine, it would be all over the news.
People would be adopting it in two weeks. But in education, where we are so slow to adopt, we still lag behind. But here's the bottom line, when you look at those other variables, the two things that
produce even greater effect than frequent progress monitoring, I'm not going to tell you what they are, you could look it up in the book, they're not teaching variables. This is the number one
teaching variable, the thing that we can control to be able to implement. Cool. Now, here's the challenge, schools don't like doing something well. I swear. Administrators, in particular. And my
wife's an administrator, she goes to meetings and everybody wants what we call the Swiss Army Knife of Assessment. I want something that can do it all, okay? And not only does it have to do it all,
it has to do without anybody being upset about it. So, the latest sort of thing, and I love computers, you can tell I'm a techie, the latest thing is anything that can go in the computer and send the
kid to the back of the room, that's got to be a really good assessment. And although some of those things might be okay, I always worry about the kid that has a history of poor performance in
school. Anybody know any kids like that? Quite a history of not particularly being interested, engaged, motivated, feeling successful, in other words, the kids don't even try. And how do we gauge
whether that kid is actually progressing under those conditions? This is the stuff that we see now as part of the Common Core. We see it as part of the technological advances, etcetera, but you know what, these things might be okay for screening, but progress monitoring, especially frequent progress monitoring, is not their strength. Okay. So, here are the
different choices of the things that you see out there and the different kinds of areas that we have variations of curriculum-based measurement for. If you have the handouts which are downloadable in
whatever format, those are also on my website and they're supposed to be in the format that people had as part of coming to the conference, right? I've highlighted some in red. Oops. The ones in red,
I think, are the ones that work best. The ones in blue, I don't like either much or don't like them at all. You'll notice, I don't really like Phoneme Segmentation. How many of you are doing PSF as
part of your K-1? I just want to know why.
AUDIENCE MEMBER: [inaudible]
MARK SHINN: So, more measures are better, right? No. We've known for 40 years: if you want to know if a kid is at risk, walk him in the door of kindergarten and have him name all the letters of the alphabet. That's all he needs to do. Kid's at risk, intervene; kid's not at risk, provide instruction -- or regular instruction, I wouldn't want to use that term loosely. But adding PSF to that
doesn't matter. It doesn't add to the predictive powers, it doesn't explain much more. Nonsense words, same sort of thing. Letter sounds works equally well or better, it's easier to score and you
don't get the baggage. We've learned a lot about how to streamline early literacy and early numeracy assessment and in particular, progress monitoring. Kindergarten kids that are at risk, I'm going
to progress monitor them using letter sounds. It grows well, it correlates 0.8, 0.9 with nonsense words, it's easier to score, and if teachers teach the test, they're teaching letter sounds rather than nonsense words, which is really the bigger minus, isn't it? There are a bunch of different things that we could look at that really work well for frequent progress monitoring. And some
others, like, not so much. By the way, never repeat letter naming as a progress monitoring tool. That's actually a screening measure. I would not do letter naming for kindergarten kids in the winter. I wouldn't do letter naming in the spring for kindergarten. I wouldn't do letter naming in the fall for first grade. That's just adding to the assessment burden that you don't need. Now, it's become sort of acculturated and, I would argue, cultish, when people don't use data until [inaudible] There are a bunch of different publishers of curriculum-based
measurement materials. A bunch of them, okay? They're all, as I said, inbred, and the lineage is not very difficult to trace, really simple. Many of us went to the University of Minnesota. I didn't understand either. So, there's Lynn Fuchs's work, okay? Jerry Tindal -- we went to graduate school together, Jerry and I, okay? There's easyCBM, now owned by Riverside and something or another. There's Yearly Progress Pro, and that's McGraw-Hill. I hired Roland Good, and I was on Ruth Kaminski's Doctoral Dissertation Committee. She was on my leadership training grant. And then, Gary Germann, who really started almost all of this stuff. I was a consultant for Gary as he built AIMSweb. So, we're all imbedded, you
know, inbred. Each one has a set of advantages and disadvantages and for you as a consumer, it's your job to know the strengths and weaknesses of each of these different potential applications. So,
one of the reasons why I like curriculum-based measurement is what I call the five S's, okay? Some things that other measures tend not to be able to do. And here is the one I like the most, okay?
This other stuff is important but what I like is I can actually see kid performance. When kids write a short story based on a story starter for three minutes, I can see what that kid wrote. I can see
if it's mechanically correct, semantically correct. I can see whether it makes sense, whether it's sequenced properly. I can see a lot of stuff. Like, reading is incredibly rich with the things that
I can see in a short sample. I like it because I can determine whether a kid is motivated or not. Now, 15 years ago, I was the court-appointed staff development person for the California Youth
Authority. California Youth Authority, if anybody knows CYA, these are kids who killed kids or kid -- kids who kill adults. These are not your sort of like, shoplifting kids. There were, you know,
like, 2,000 some kids in the State of California from 10 years old to age 21 in youth prisons. And I was the court-appointed person because only one and a half percent of those kids were identified
as special education eligible. That seems sort of low for a prison population, a little lower than what you might expect. And the quality of their services was low, so they said you need to do a better
job of screening and you need to -- a better job of writing IEP goals and progress monitoring. So, we had hundreds and hundreds and hundreds of young murderers read aloud to us. How many kids said
no? None. Some of these kids were 17 years old -- you'd look at their sheet and go, like, "I can ask this kid to do something?" If a kid had refused, I would have said, "Okay, fine." And not a single kid refused. That is
remarkable to me. Why did something like that happen? I like the authentic aspects of it. I also like the fact that it's simple to do and easy even for a non-literate parent to understand because I
want a kid to be able to understand the data. I'd like a parent to be able to understand the data. I'd like us all to be able to understand something. Sensitive to improvement, the fact that it
actually grows is good and, yeah, it's scientific. There is a strong body of knowledge, over 250 refereed journal articles in reading alone about how well this stuff works. And here's the other
advantage. I'm going to talk about being seamless. I'm going to show you an illustration that the same basic techniques we use for Billy at fourth grade work equally well for an emerging reader in kindergarten and work equally well for a deficient reader in high school. You don't have to learn a different technique for a kid who's in third grade than for a kid who's little or a kid that's old. If
you're interested in reading and you want to be able to say is this kid different or is this kid progressing, we can do it in the same way. It's kind of neat. Let's take a look at a high school
student, eleventh grader, struggling in content area instruction. I'm going to give this kid an eighth grade passage. Let's see if this student is struggling with basic skills. Oh, you'll -- it gets
better until about the first five, six sentences.
[VIDEO BEGINS]
TEACHER: [inaudible] ...Are there any questions?
STUDENT: No.
TEACHER: Okay. Begin.
STUDENT: After moving to a new town -- a new town, nine years old Samantha and her twelve-year-old brother Robert had heard of an old toboggan slide from some of the other neighborhood children. They
decided that they needed to check -- to check it out. Suddenly -- what is this word?
TEACHER: Supposedly.
STUDENT: Supposedly, it was on the north side of the peninsula in the middle school...
MARK SHINN: In the middle school.
STUDENT: ...on the lake behind their new home. Paddling lazily, they head across the lake in their canoe. Just as they had been told, there was a...
[VIDEO ENDS]
MARK SHINN: Good enough for eleventh grade? Good enough for eighth grade? Good enough for sixth grade? How's this kid going to do in that physics book? Not good. I can do this the same way as I did with Billy. I can score it the same way. This kid would be reading at about the eighth percentile if they were compared to eighth graders. That's not good. Now, you're going to have a hard time listening
to this one on here, though, I think. But here's a kindergarten kid reading in April.
[VIDEO BEGINS]
TEACHER: When I say begin, start reading aloud at the top of this page. Read across the page. Try to read each word. If you come to a word you don't know, I'll tell it to you. Be sure to do your best reading. Are there any questions?
MARK SHINN: That's it.
STUDENT: Uh-uh.
TEACHER: Okay. When you're ready, begin.
STUDENT: Ants got a run and that's...
[VIDEO ENDS]
MARK SHINN: She's crying but she's reading. She's going to be reading about 40-some words correct with five errors in this. That's really pretty good for a kindergarten kid. So, what can I do with these kinds of data? Okay. If I have a basic measurement system that everybody's good at -- first grade teacher, seventh grade teacher, Special Ed teacher, school psychologist, social worker -- we all know and understand these kinds of data, I could monitor progress seamlessly across three tiers. For kids that are in Tier III, I could monitor progress weekly. For kids that are in Tier II, I could do weekly, but the data would suggest I could also do it as little as once a month. So we can then set up a really nice system for at-risk kids. It's a choice of your resources, how often you'd like to do it. And you know, I could monitor all kids three times a year -- my advice is through at least the first year of middle school in low-risk communities. I could assess everyone in a very simple data system. For this, it'll cost me about 15 minutes of individual time a year: five minutes -- three passages -- three times a year, fifteen minutes a year, to get a commitment to be able to say, "Are all of
our kids growing?" And communicate that information to parents. This is sort of what it looks like. All the different publishers can have different colors in terms of their boxes. Almost all will
have that box: the average readers -- in other words, scores between the twenty-fifth and the seventy-fifth percentile. Here is a kid that scored at, you know, the twenty-fifth percentile. They decided to leave this kid alone because the teacher had some really good programs that they deliver as part of their Core. Here's this kid by midwinter, who's reduced the gap, and by the end of the year, reads
as well as other kids. This kid has benefited from the General Ed Reading Programs. This is what I like to do at Tier II, is repeat the benchmark every month. Here's a student Emma who is in a Tier
II Program. In this case, operationalized by Title I. Emma is, compared to all the other kids in Tier II, that's this disaggregated box, we measure all of our Tier II kids once a month. Emma is
growing, Tier II kids are growing. My concern is the Tier II kids aren't reducing the gap. This is not an Emma problem, this is a lack of intensity in our Tier II. At Tier III, we now had a more specific rate of improvement designed to reduce the gap. The overarching idea in Tier II and Tier III is: reduce the gap. If your goal is not to reduce the gap, it's not a good goal. Here's a
student that was clearly not reducing the gap, formative evaluation would say that we make a change in intervention and now this kid's rate of improvement is reducing the gap. And for kids with IEPs,
this is the rate of improvement that reduces the gap and there's the rate of improvement that I would call the serious kick butt rate of improvement. Why is this kid doing so well? Because this is a
well-trained Special Ed teacher with a very intensive, validated Special Ed intervention program and a validated mechanism of knowing whether it's paying off. This is what we can get with many, if
not most kids, who are receiving Special Ed. And it's driven by a data system. I like a seamless data system. See, I'd like to be able to monitor progress more frequently because you know what, my
graduates get freaked out and they say, "You mean you're picking interventions out of a hat?" And I say, "No, I'm never picking interventions out of a hat. I'm picking interventions based on whether
they work on average better than others." But the fact that something works on average for more kids than other programs or works well with many or most isn't ever a guarantee that it works for all.
That's why we write goals and monitor progress. And here's a really good intervention that went nowhere. It was implemented with fidelity. It just didn't work for this kid. So, they made an intervention change -- they moved the kid to the Horizons reading group. And did they get better results? Yeah, but not enough to reduce the gap. So, now, they started adding some elements of Corrective Reading as well and reduced class size and, you know what, now they're getting a rate of improvement that is actually faster than the expected rate of progress. This, to me, is the
promise that Special Education is supposed to provide -- Tier III and Special Ed. Now, what can I do with these kinds of data? You know, how many of you have seen the IEPs According to Dr. Seuss? That's
sweet. Do you like these IEPs? I do not like these IEPs. I do not like them, Jeez Louise. We test, we check. We plan, we meet. But nothing ever seems complete. Would you, could you like the form? I
do not like the form. I see not page one, not two, not three. Another change. A brand new box. I think we've all lost our rocks. I found this posted in a Special Ed teacher's room, right above her desk. Okay.
As I'm walking in and saying, like, "Wow, what is that?" And she said, "Oh, you should read it." And I said, "You're right. And guess what I'm here for?" So, that was -- you know, my wife said I was forfeiting my soul, but for six months, I was the RtII consultant for a Chicago public school. Now, if you want to hear some really good stories -- I mean, a lot of people who are really well-intentioned, some trying harder than others, doing fairly strange things. Their IEPs were 16 pages long and nowhere in there was a place to describe the intervention a kid would receive. It's all about
accommodations and stuff like that. Do you have questions? I would say...
AUDIENCE MEMBER: We were talking about the last slide about the...
MARK SHINN: Okay. You want to go back to the last slide. The IEPs never seem complete.
AUDIENCE MEMBER: Oh, no, I know that by heart.
MARK SHINN: Oh, yeah. Great. Okay.
AUDIENCE MEMBER: You're talking about rate of improvement [inaudible] but how would you use that rate of improvement? Is that just for all children in general? You've got students with multiple disabilities and processing speed issues and other kinds of disabilities. How is that rate of improvement different for a student with a disability who doesn't have a variation in those kinds of areas when it comes to the psychological testing, Mark?
MARK SHINN: Yeah. So, that's a really long and important question about rates of improvement.
AUDIENCE MEMBER: Yeah.
MARK SHINN: Okay. And here's what I want you to do. I want you to forget about all that, okay? Rates of improvement to me -- especially the normative rates of improvement -- here is the danger: they're taking on the characteristics of what we call developmental growth. Rates of improvement are a function of the intensity of the intervention, not the kid, okay? Let's all think about that for a minute. Rates of improvement are a function of the intensity of intervention, not a developmental characteristic, like how fast your feet grow, okay? And my worry is that by providing some general ideas about how kids grow under nondescript academic conditions, okay, these will take on a life of their own and be set as sort of upper caps or limits, okay? It's kind of dangerous, isn't it?
AUDIENCE MEMBER: Yes.
MARK SHINN: So, let me -- a study that I published with Mary Gleason and Jerry Tindal in 1987, 1988 ends with a summer school program, okay? Kids got qualified for summer school because of their lack of improvement in their Special Ed programs. You get a sense that these kids are probably pretty discrepant, especially given that most schools now don't provide summer school programs. So these kids, who qualified for summer school because of their lack of improvement during the school year, were taught by graduate students who were achieving rates of improvement of three to five words a week. Oh. See, good instruction, carefully monitored for fidelity of implementation, with really good programs, can accomplish so much more than those normative growth rates, okay? Those normative
growth rates are pretty good for, like, okay, this kid's in third grade, doing as well as other kids, not really worried about him. Can they grow? Sure. About how much? Those are fine. But for kids that are significantly discrepant -- and here's this interesting disconnect, having been in Special Ed for, like -- I'm older than everybody -- the goals were designed to drive the program we deliver, okay? And what's turned out in practice is that the goals are over here and the program is completely different. So the thing is -- and we will see a slide about it -- we need to think
about The Biggest Loser, okay? Now, don't take that as Special Ed, loser -- it's just a metaphor. But on The Biggest Loser, if somebody weighs 450 pounds and a physician said that you really don't want to lose much more than three quarters of a pound a week -- if we took that growth rate and applied it to somebody that significantly discrepant, that person would be a hundred years old before they got healthy. And our intervention would be relatively weak. Anybody ever watch The Biggest Loser? I love it for trash TV, I can't help it. You look at that intervention program they put those people on? It's unbelievably intense. And that's what Special Education is supposed to be, okay? So those growth rates, and we'll see an example -- to me, that's the minimum, bare minimum, bare, bare, bare, bare, bare minimum I would ever consider for a kid who's receiving Special Ed. Okay. Because otherwise I'm institutionalizing weak interventions, all right. Okay. We'll come back. All right. So
these are goals from a school in spring of 2013. There's the annual goal, and I didn't just pick it because it was the worst. I picked it because it was representative of 17 school districts where my practicum students and interns were working -- they all had to bring one in. This was pretty representative because this is exactly what I would've seen in my first job in St. Paul Schools in 1980. Okay. Here's a global annual goal where, you know, if a kid reads one more word, they can say, well, he met that goal. And then look at this utter nonsense: a bunch of short-term objectives that most of the people in the room, as you're talking about them, won't have any clue what you're talking about. Will they be able -- okay. Given a -- and you guys will, because you're Special Ed teachers, right? But wow. And how many of you go to meetings where people are on autopilot? The bobblehead doll, sure. Or, you know what, nobody's talking about your goals because there are too many of them. Okay. So
this is kind of interesting because this is the real problem: this is bad, but this is worse. Okay. In the old days I used to make jokes about, you know, what, you want a kid that crosses the street successfully 80% of the time? And then I was working in Mississippi for the Department of Ed and the director said, "That ain't funny. We just got a move-in kid." Okay. The move-in kid came in with a goal: the kid will cross the street successfully 80% of the time. It's like, you know, I didn't write it, but this thing came in that way. And so, if it's something important that kids need to do, they not only need to do it accurately all of the time, they need to do it without thinking. So my all-time worst -- and by the way, you should collect your all-time silliest goal or evidence of no thinking -- a director of Special Ed sent me this, a move-in kid again: the kid will not bring a gun to school 80% of the time. So this problem was noted in 1978. Okay. For those youngins,
everybody knew in Special Ed in 1978 that this dog wasn't going to hunt. It was based on a book by Mager and Pipe called Writing Behavioral Objectives; it said, look, kids can't be perfect, how about -- I don't know -- 80%? Okay. Not bad maybe for the early 70's: kids can't be perfect, but how about 80%? But it has absolutely nothing to do with what we know about learning and meaningful improvement. So in 1978 Congress funded the University of Minnesota to develop a technology to write [inaudible] and monitor progress, and here we are -- we went back to our [inaudible] and we'd say, even in Special Ed, with the technology once developed, we're not implementing. And you should know that short-term objectives were no longer required in 2004 for two reasons: there's no evidence that attainment of short-term objectives leads to meaningful student improvement, and the second point is they just make for really long IEPs that nobody invests in. So we can do something different,
these are crazy goals, and we can make it so much more straightforward. So how about this one: in one year, John will read a hundred and fifteen words a minute correctly, with very few errors, from a grade-four passage. This seems relatively straightforward. Now, it's not easy to figure out why grade four, and it's not easy to pick what we call the criterion for acceptable performance, or the CAP. It's a simple phrase, but it requires thinking to get there. And there's a lot of behavior that goes into achieving this. Just like there's a lot of behavior behind saying that guy who weighed 425 pounds will weigh 225 pounds at the end of treatment. Do you think there were a lot of other things that they measured as part of that? Absolutely. Do you think there were a lot of things that were important to do with fidelity? Absolutely. But the bottom line was, at the end of the show, they stand the guy on the scale. Oh, we can do this in other areas, Math and Writing, quite
easily, and these are the kinds--this is not the only example but good teachers with good interventions delivered with fidelity and the capacity to know what works can produce these kinds of
outcomes. Now, well, oh, man. So what else could we do with this, as we're thinking about building a data system for our IEPs? How many of you can go to your school and get a regular graphic display of your kids' performance? How many of you have ever seen a video of your kid performing? How many of you have seen your kid in a choir? On the soccer team? On the ice skating rink? In gymnastics? Playing at the school concert? But you've never seen your kids perform in school. Isn't that weird? So I've been working with schools to develop a school literacy portfolio. If you've got PowerPoint, you can do it. Put in a picture of your school, come up with a fancy title, and take your cell phone, take your iPad, and capture one of those benchmark assessments that you do. Just one passage -- the files are so small now, you can drag and drop it. Here's what a student was expected to read; drop in the video, drop in any of your CBM graphs, which are all in jpeg now -- just drag and drop -- and then have a conference with your parents and say, "You want to see how your kid did?" It's really cool. Well, that was once, and it's the second time that gets them.
[VIDEO BEGINS]
TEACHER: [inaudible] you're such a good reader. So good, because I like to hear you. When I say "begin," start reading aloud at the top of the page and read across the page. Try to read each word, and if you come to a word you don't know, I'll tell it to you.
STUDENT: Okay.
MARK SHINN: Of course, we didn't have a professional video person doing this, or her head wouldn't be cut off.
TEACHER: Okay. Begin.
STUDENT: Ants on the rock. Ants are on the rock and the rock felt something. Ants stop by...
[VIDEO ENDS]
MARK SHINN: Bottom line, there's what the kid was, and there's what the kid is now. And look how she's grown -- the kid's cuter; I mean, she was cute the first time, and she's grown a lot. How come we're not taking advantage of technology to put together something where not only teachers can talk about kid performance, but students can take responsibility for talking about their own growth? The things we do for
progress monitoring, to me, in 2014 have the capacity not just for IEPs, not just for Tier II, not just for Tier III, but for improving school achievement for everyone -- and getting everyone invested in improving reading.
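[Editor's note: the "rate of improvement that reduces the gap" Shinn keeps returning to is, at bottom, simple arithmetic on the goal. A minimal sketch in Python -- the numbers, the function names, and the 36-week school year are illustrative assumptions, not figures from the talk:]

```python
def required_slope(current_wcpm, goal_wcpm, weeks):
    """Weekly growth in words correct per minute (WCPM) needed to hit the goal."""
    return (goal_wcpm - current_wcpm) / weeks

def reduces_gap(observed_slope, current_wcpm, goal_wcpm, weeks):
    """True if the observed weekly growth is at least the gap-closing rate."""
    return observed_slope >= required_slope(current_wcpm, goal_wcpm, weeks)

# A hypothetical kid reading 45 WCPM who must reach the 115-WCPM grade-four
# goal in a 36-week school year needs to gain about 1.9 words per week:
need = required_slope(45, 115, 36)
```

[If the slope through the kid's weekly data points falls below that required rate, formative evaluation says change the intervention, not the goal.]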
In 1982, here's how we would conduct an IEP annual or three-year evaluation. We're going to take a look at the student's current IEP goal and answer this question: if this black line represents reducing the gap, is this student meeting or exceeding the rate of improvement that reduces the gap? This student is exceeding the expected rate of improvement that reduces the gap. Is this student reducing the discrepancy from peers? In fact, this student is no longer discrepant. In fact, if we were working in the real world, this person would've been considered for a reduction in services here. But now there's absolutely no basis for saying this kid is significantly discrepant such that Special Education is warranted. I would now suggest that we develop a transition plan. We've just completed our IEP. We've completed our annual review. The notion of having to go out and collect additional data is not a good use of our time. If we're going to be
collecting these kinds of data and we're going to be making a Special Ed eligibility decision under SLD, don't you need to have some progress monitoring data to be able to do that? Isn't that now kind of a problem, because the data we collect are not institutionalized or commonplace?
AUDIENCE MEMBER: On the previous screen, when you were making the determination that the student may no longer need Special Education.
MARK SHINN: Uh-hmm.
AUDIENCE MEMBER: What percentage are you using as that goal, to get that and more?
MARK SHINN: This one, right? How we set that rate of improvement?
AUDIENCE MEMBER: Right. I mean, you're saying 25 to 75% [inaudible] be the range.
MARK SHINN: Oh, this is the normative performance. This is what we call the performance discrepancy, and this is what we call the progress discrepancy. Okay. So the performance and -- let me do the next slide. Okay. Because here's one of the really -- so, Illinois is an RTI-only state in terms of SLD identification. You can do ability-achievement discrepancy, but it doesn't matter. Okay. Whether a kid is eligible under ability-achievement discrepancy -- if they're not eligible under RTI, they're not eligible. Okay. So you have to have these kinds of data, and guess what we're still doing in Illinois: ability-achievement discrepancy. It's really hard to -- even though it was codified in 2006 in Illinois, it's hard for people to give stuff up, mostly because they've not learned the basics about how to determine eligibility, duh. Here's what it looks like. This is adapted from Lynn Fuchs's presentation in 2003: a kid has to be discrepant from peers in their achievement. This is
called the performance discrepancy. It measures the severity of educational need. A kid can be discrepant, okay, but not eligible, okay? So it's necessary but not sufficient. So this is a kid -- if this is average, and this is below average, and below that line is the bottom 10th percentile -- in Illinois this kid could be significantly discrepant: necessary but not sufficient. If this kid is progressing -- the rate of improvement reduces the gap -- even though they're discrepant, they're not eligible. This is called the progress discrepancy. We need to measure both to determine eligibility. Measuring the performance discrepancy is relatively easy; we're used to doing that. But what we're not good at is the progress discrepancy. And because we're not good at the progress discrepancy, people don't want to do RTI Special Ed eligibility. And here's that dilemma, chicken and egg, chicken and egg -- which comes first? Do you get really good at eligibility determination before you get good at progress -- no, you've got to get good at progress monitoring before you get good at eligibility. So this is a kid that would likely not be eligible because, even though the performance discrepancy is there -- look, the kid's rate of improvement now is reducing the gap -- this kid should continue to receive a Tier III intervention.
AUDIENCE MEMBER: [inaudible]
MARK SHINN: No, all of these. Yeah. Yes?
AUDIENCE MEMBER: Okay. In the slide where the student was close--had close to the gap.
MARK SHINN: Uh-hmm.
AUDIENCE MEMBER: I mean, is it possible that he closed [inaudible] closed the gap because of the intervention?
MARK SHINN: Absolutely.
AUDIENCE MEMBER: That's removed...
MARK SHINN: Yup.
AUDIENCE MEMBER: [inaudible] reappears?
MARK SHINN: Absolutely. That's why you develop a?
AUDIENCE MEMBER: Transition plan.
MARK SHINN: Transition plan. So -- let's see, you can find some stuff on this. I had a three-year federally funded grant to study how we might identify and transition kids out of programs. It was called the Responsible Reintegration Project; you can find the chapter on reintegrating kids in Best Practices V, okay, done by Kelly Paul Smith, who was one of my research assistants on that project. So we know how to do transition plans. You maintain the same progress monitoring system that you had as part of the IEP.
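[Editor's note: the "same progress monitoring system" in a transition plan boils down to keeping weekly scores and watching the trend line. One way to sketch that trend is an ordinary least-squares slope over equally spaced weekly scores; the data and function name below are invented for illustration, not taken from the project Shinn describes:]

```python
def weekly_slope(scores):
    """Least-squares slope of scores collected at equal weekly intervals."""
    n = len(scores)
    mean_x = (n - 1) / 2                     # weeks are 0, 1, ..., n-1
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical post-exit WCPM scores for the first six weeks back in General Ed.
# A positive slope counters the myth that the kid will regress after exiting.
post_exit = [62, 64, 63, 66, 68, 69]
trend = weekly_slope(post_exit)              # positive -> growth maintained
```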
AUDIENCE MEMBER: And will that student still be considered a student with a disability?
MARK SHINN: Could.
AUDIENCE MEMBER: So you can kind of work it either way?
AUDIENCE MEMBER: Could?
MARK SHINN: Could.
MARK SHINN: Yeah. The whole notion, though -- and here's where it drives me crazy -- is that we feel we have to, like, abandon kids. No, a good transition plan would maintain the same progress monitoring process, and so as part of our grant we ensured that we collected the same rate of progress monitoring data and met monthly for the first three months. And how many kids did we need to return back into a more pull-out model for services? None. I think we had two kids where we needed to make some modifications in the instructional program as it was delivered in General Ed, but most of the kids maintained their growth. You know, the myth that the kids will regress is important to consider, but that's so easily countered by progress monitoring data. Okay. All right. So here's this goal
discrepancy stuff; it's really important for us. Now we have the capacity to say, is this kid not progressing? And here again, let's go back to this dilemma: how do we do this now as part of our Title programs? It bothers me when we say we're using end-of-the-unit tests, or when we're using some other method to make this judgment. I want to make this with integrity if I'm making eligibility decisions. But to me it starts with the quality of progress monitoring that occurs after a kid is eligible. If we're not using good practices after a kid is eligible, why in the world would anybody adopt better practices to determine a kid being eligible? It's just not going to happen; it'll be way too slow, and people will very quickly backtrack to the old system for eligibility, which was psychometrically flawed and morally wrong. Okay. That's pretty strong language. Okay. Give me 10 minutes here. Barriers to implementation. It's not about science, gang. It's not about the evidence
about what these tools measure; it's about people telling stories and about real-life experiences that are real barriers. But science isn't going to convince anybody. If it did, this would've been settled long, long ago. Number one is the perception that what we do is measuring fluency. Jay Samuels, one of the original architects of the theory of automaticity with David LaBerge in 1974, referred to this process as automaticity, not fluency. That's a really big deal. And here's the other element, the problem of comprehension. So let me do a fast cycle. Okay. Well, the
logical problem and here is the challenge for me in dealing with this problem for 35 years now. It's the smart teachers that push back. The smart teachers, I don't want the smart teachers pushing
back. I don't mind if somebody doesn't have a lot of training, a lot of experience, but it will be Carol Rye, you know, one of the best teachers I've ever seen, first time she started doing this
stuff, "I have a real problem with this Mark, you know, reading is more than just how fast you read." And I'm like, "Yeah, you're right, Carol. You're absolutely right." "No, Mark, really, seriously.
All this does is to encourage the teacher to teach kids to read fast. And that's not good reading." I'm going "You're right, Carol, you're right, Carol." And I've encountered this time and time and
time again. So we've got to understand this misperception that because something is timed, it's a fluency measure. Ay-ay-ay, it doesn't have to be that way. Misinterpreting the National Reading Panel's five areas of reading. There's five areas of reading, right? And of course fluency is one of them. And they're all separate, right? Oh, there's a mistake. And then here's the other problem: some of us, including me, used the word fluency in our writing. So if we use the word fluency, guess what, everybody else will use it, too. So let's backtrack. Short tests equal fluency? No. They
were designed to be short, so teachers didn't lose time for instruction. And they needed to be timed, so they could be standardized. How many remember informal reading inventories? Did you know all this stuff evolved out of informal reading inventories? Instead of fixing passage length at a hundred words--and those who are old enough remember how long a kid who couldn't read would take to read a hundred-word passage, and then have to read another one; oh, they were really excited to do that--instead of fixing the length of the passage, we experimented with fixing the length of time.
How about if they read ten minutes, how about if they read seven minutes, how about if they read five minutes, how about if they read three minutes, how about they read one minute, how about if they
read thirty seconds? And they all worked equally well. People really didn't like 30 seconds. They really didn't like it. Okay. A minute seemed okay, but scientifically they're all about the same.
So we've got to get over this. Short so you can teach--that's important. You've seen Bill, we're done with Bill. Here's the thing, here's the graphic organizer from the National Reading Panel. Do you see separate boxes? No, these are highly correlated skills, highly correlated. If a kid has pretty good vocabulary, you know what, they're usually a pretty good reader, they usually understand what they read, they usually have automaticity with the [inaudible]. Kids who don't have one usually don't have a bunch of things. When we assess, we're assessing at this point when kids read aloud.
Reading [inaudible] oral reading test. How many have used maze? Maze is a silent reading test; it is no more a measure of comprehension than oral reading. In fact, oral reading correlates more highly with other measures of comprehension than maze does. Oral reading is oral reading, silent reading is silent reading. Now, here's what drove me crazy. People took these five areas and put them into isolated boxes, right? And they came up with tests for these isolated boxes. And then you develop profiles for kids based on these boxes, and then teachers would teach based on the boxes. And so in some schools they're going, "Here's our fluency group. Here's our comprehension group. Here's our decoding group. Here's our vocabulary station." They even had stations. These would be what we called band-aid
programs. And they don't work. Shallow implementation. Here's what you'll get: if you were to interpret this as a measure of fluency, your teachers will interpret it as the goal of reading is to read fast. Do you want your teachers to do that? No. And it'll be your smart teachers that push back. If you treat it as an oral reading measure of general reading skill, you can say, "Look, our goal is to get kids to read well." If you treat it as fluency, your interventions will be about speed, and you know what the number one intervention will be, right? Repeated reading. If a kid can't read, why
would you expect him to read faster? Don't you think they should learn how to read better? Repeated reading for kids who don't read only encourages things like guess and go, skipping, or memorization
of the passage. That's the only way you can get better, and that's not meaningful. If you treat it as what it's supposed to be, your good teachers will hear you say that you value reading quality. Otherwise your interventions will be what we call band-aid programs, and band-aid programs work well for little cuts. Hey, repeated reading for a kid who's a pretty good reader but struggles with pace, that's a
good intervention. But repeated reading for a kid who can't read, as your primary tool in your toolbox? Ain't going to work. So what we want to do is, if we've got kids that are significantly discrepant, we want to use bandage programs. And the difference between a band-aid and a bandage is, it covers more. Your interventions for kids that are significantly discrepant should present an integrated evidence-based program that covers lots of things. Well, so what do we have to say at the elementary level? Let's take a look. This is on repeated reading. This is by David Chard, now the Dean at Southern Methodist University, and some really heavy hitters like Scott Baker, one of my former doctoral students: "Our results indicated repeated reading does not qualify as an evidence-based or promising practice for students with or at risk for learning disabilities." That's kind of like, wow. A status of the evidence, from Exceptional Children. We need to take a look at it. Okay. So,
problem of comprehension. How many of you would say, I know what this kid does: reads fluently but doesn't understand. Anybody know a kid like that? Don't we all? Of course we do. I put a white paper together for you; you can take a look at the stuff that I'm going to cover very quickly in the next few minutes, because I want to be able to skip over the middle part and go to writing goals for kids' IEPs. And that means much of this stuff, you know, you'll have to read on your own--okay. So, the science part, there's the science part, there's one. We're skipping. So,
when Scott Baker--I mean, actually Chad Hamilton--did his doctoral dissertation, he went out to a bunch of schools and put into every teacher's mailbox a little card--he didn't want to use email--a little card that said, "Do you have a kid that you think reads fluently but has difficulty understanding the text?" And he didn't go back and bug people, because we didn't want to be seen as aversive, but almost half of the teachers reported back, okay? And if they reported one, we basically asked them if they could identify a kid who read at about the same level of fluency but understood what they read. Okay? We call those similarly fluent peers. So every teacher had to name that kid, okay? And they had to name another kid that read the same but had comprehension, okay? And guess what we discovered: we
discovered that teachers were partially right, the kids, we ask teachers to make predictions about how well they read and how well they would do on a comprehension set of test, tests. We discovered
that kids that were actually what we called word callers performed much less well than the -- than the kids that the teacher identified as [inaudible] peer, teachers were half right but they were
totally off on whether kids were good readers. What they called fluent readers, there's where the kids actually read -- they read about a standard deviation lower in oral reading, than the kids that
they said -- remember, these lines that we predicted to read the same, that's what the teachers actually predicted and those differences were huge. So, Science part, teachers were half right. I'm
going to give you a comprehension test. Many of you will do better on this test now than you did before December 1st, mostly because this has had a little bit of visibility in the news media, okay? But I want you to read this sentence. I have yet to encounter an educated adult who can't read these words. Got it? Now, this--oops, skip over, my layering went goofy, so this one first. Who's A-Rod?
AUDIENCE MEMBER: Alex Rodriguez.
MARK SHINN: Okay. How many of you don't know that? It's okay. See, you're going to get different answers. You're going to get some people saying, oh, I've heard that name, he's in the news. Well, he's the guy that was banned from baseball for using steroids. You'll hear somebody in the room going, Alex Rodriguez, he plays for the Yankees, not really a very nice guy, he dated Madonna, he was playing for this--and you'll have some people go on and on and on about that. Where does it say all that? It doesn't? Okay. What does six mean? Anybody not know what six means?
AUDIENCE MEMBER: Third base, shortstop.
MARK SHINN: Okay. So, it's a shortstop. Okay. People who know baseball know what that means, but they didn't get that from reading the text. And you get down to here: why would people from Beantown celebrate this? Could anybody answer this question for me?
AUDIENCE MEMBER: They hate the Yankees.
MARK SHINN: Okay. Beantown is Boston and so what?
AUDIENCE MEMBER: Division rivals.
MARK SHINN: Division rivals, for who?
AUDIENCE MEMBER: Yankees.
MARK SHINN: And so why does that matter? Where does it say Yankees up here?
AUDIENCE MEMBER: A-Rod [inaudible]
MARK SHINN: Yeah. Because Alex Rodriguez plays for the Yankees, and the Yankees and Boston--Beantown is a nickname--there's a ton of stuff that's not on the page. Can you read this and not understand some things? Of course. And then here's another one. Why would this event mean something different in October than it might in June? Because playoffs, World Series, all that stuff--none of that is on the page. So, here is--if you don't get anything else from today, let's look at something that Joe Torgesen presented a few years ago that puts so much of this stuff
in perspective for me. We have problems with the measurement system because of this belief in kids who can read fluently but don't understand. We have this incredibly simplistic idea of what understanding is. It's really naive. It's really dumb. Let's take a look at what Torgesen says--and I was on a board with Torgesen for five or six years, and it was a full-day meeting, and at the end of
the day everybody waited for Joe and Mark to argue about the F word. Okay. And he's a really famous guy and an unbelievable scientist. And the nicest person you could ever imagine. He's really
tremendous. But I put an asterisk by his presentation. Okay. Everything else is the same. So, what he says, and what Deno and Lynn Fuchs and I would probably say, is general reading skill--don't call it fluency--but we agree on what goes into it. Okay? And some of the things are reading with expression, automaticity, okay, accuracy, decoding especially multi-syllabic words, and for young kids only, phonemic awareness, got it? Any speech or language people in the room? That's what you've been saying for 30 years. Not you, because you're not that old. Language really matters; language is huge. So
this thing, you know, all of these things--the two probably biggest ones are going to be vocabulary, that's the killer, that's the killer, vocabulary, vocabulary, vocabulary--and you acquire vocabulary largely from what two sources? Okay. Number one is what you come in with, okay? And the second is reading volume. Do kids who read poorly read a lot? No. So academic English, typically defined as words above the fifth or sixth grade level, you acquire largely from how widely you read. Kids who don't read well don't read widely. Okay? So--and here is the other one,
cultural influences; these are idioms or figures of speech. What was determined, at least last year, to be the most ambiguous word in the English language? The word run. How many different meanings have they identified for the word run? It's a three-letter word. Everyone can read it. Fifty-seven, fifty-eight? I'm going to run for office. I'm going to run away from the problem. I'm going to go out for a run. There are a billion of them. You can score a run in baseball. See? Give me a break. Cut it out. Oh, come on. All of those things have very different meanings, and print and oral language is filled with them. Okay. Guess what else? You need to know something about what you're reading. You
need to have life experience. You need to have a bunch of stuff. A lot of the kids you say have good comprehension just have good life experience. They didn't get that from reading the text. How many of you, when you had to do something like read a workbook passage and answer questions, went right to the questions? Come on, be honest. You could do it without reading. That's what we kind of confuse. Now, here is the other part. See, the last area--and see what's at the top of the list. What makes you think these kids give a rip, that they care, that they try, that they even bother? This is
the stuff that sort of matters. And Torgesen wraps this stuff up by saying that "For some kids, learning to read will be the hardest thing they do in their whole life." Some kids struggle with it; we deal with these kids every day. But if you look at all the stuff on this slide, this is the easiest to teach. I didn't say easy, but it's far easier to teach this than everything else. Here's what else Torgesen will say: "The longer it takes to get this job done, the bigger deficit you're going to have here, the bigger deficit you're going to have here, and the less motivated kid you're going to have here." All right? Makes sense? Now, of all the stuff that we want to measure, guess what's the easiest to measure? General reading. And here we are, 35 years on, lagging
behind in our capacity to assess general reading skills in a simple and scientifically sound way, especially for kids that have IEPs. Shame on us. Can we do it better now? Okay. So here's the third thing, if I could leave you with something here today: don't use the F word, think about comprehension as something deeper than that, and believe this mnemonic, that the goal of our reading instruction is to teach kids to read early, well, and wide--or widely. And the kids we work with that have IEPs with goals in reading typically didn't learn to read early. They've never learned to read well, and why would we have expected them to read widely? Our job is to reduce that gap, and that comes from frequent progress monitoring. Now, if you want some references on
some of this stuff, here's a really great book--teeny little print, it's in your handouts. I wish Phyllis would not have used the words Helping Children or Students of Color Become Successful Readers, because I would argue the content is applicable to all kids, especially poor children. If you haven't read Cunningham and Stanovich's classic article What Reading Does for the Mind, take a look at that. And then there's the chapter that I put in your folder for you, by Carolyn Denton and Sharon Vaughn, on preventing and remediating reading difficulties. Okay? All right. So, let's
skip to individualized goals. Okay. So, how do we do this? Look, everybody hates what we do now. How many of you have people, like, go on strike because they love our IEP process? I don't think so. Everybody hates it. We know that if we change how we write goals, we'll get better progress monitoring. And we know if we do that, we can get better student achievement. Going back 35, 36 years, people made this federal effort to improve what we do. And here we are, and special ed has to lead. So let's go. So, here's the history, you can read it. These are real-life goals--stop
it. If you haven't read this book: this was published by the Council for Exceptional Children in 1977. This is from my mentor, Stan Deno, at the University of Minnesota. I had the pleasure of the opportunity to study with Stan Deno and Jim Ysseldyke--what a great education I got. You'll find that some of the language in this manual is dated, but it still rings true today. It's the basis--
notice--wow, there are a bunch of graphs, wow. We know that the US Department of Ed for more than 12 years has been putting special effort into getting us to do a better job of writing goals and progress monitoring. This is OSEP, the Office of Special Ed Programs, US Department of Ed, now rolled into the National Center for RTI, now rolled into the Center for Intensive Intervention, all doing some of the same things. Here is the fundamental notion that we need to have, okay? Obviously, to do this right, we need to know present level of performance. We need to be able to set a timeframe, a level of curriculum, and what we call the CAP, the criterion for acceptable performance. And the most challenging one will actually be how we define adequate performance, because people will go back to that rate of improvement to do that.
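The four pieces named here--present level of performance, a timeframe, a level of curriculum, and the criterion for acceptable performance (CAP)--reduce to simple arithmetic once the team has chosen them. A minimal sketch; the numbers, the 36-week year, and the function names are illustrative assumptions, not norms:

```python
# Sketch: turning the four goal components into a written CBM-style goal.
# All values are illustrative, not from any norms table.

def weekly_growth_needed(present_wcpm: float, cap_wcpm: float, weeks: int) -> float:
    """Rate of improvement (words correct per minute, per week) implied by the goal."""
    return (cap_wcpm - present_wcpm) / weeks

def goal_statement(present_wcpm: float, cap_wcpm: float, weeks: int, grade: int) -> str:
    roi = weekly_growth_needed(present_wcpm, cap_wcpm, weeks)
    return (f"In {weeks} instructional weeks, the student will read grade {grade} "
            f"material at {cap_wcpm} words correct per minute "
            f"(present level {present_wcpm}; implied growth {roi:.2f} wcpm/week).")

print(goal_statement(present_wcpm=24, cap_wcpm=100, weeks=36, grade=4))
```

The math is the easy part; as the talk stresses, the thinking is in choosing the curriculum level and the CAP.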
The math is simple, but no thinking is required. And to write a goal for a kid, you need to think about what you want to accomplish and the program that's going to be used to deliver it. So, how many
of you have ever done a Survey Level Assessment? Not seeing any hands go up. There's one. Okay. So here's this--a Survey Level Assessment is an informal reading inventory done using
curriculum-based measurement, okay? And here's what it looks like. This is a real kid. This is Ginny, okay? Ginny is a move-in kid into a middle school, sixth grade. This is last year's benchmark
data or let's say it's last year's normative data nationally, pick your publisher, but the colors of the chart will be a little different. There's performance of typical sixth graders, fall, winter,
spring, and there is Ginny. Ginny read about, you know, 20 some words a minute correct. Ginny in sixth grade is expected to be reading like other kids. If the bottom of each of these lines is the
tenth percentile, she's significantly discrepant. She's well below the tenth percentile, okay? This is called a performance discrepancy: significantly discrepant, but how discrepant?
What can she read? Almost all of our assessment has been driven by what kids can't do. Now, to figure out how we write a goal, we need to start from where they are. So, there's the average.
She's reading 24. Let's give her three passages from fifth grade. Takes me about five minutes to do that; there's her performance. She read about 30. How does she read compared to fifth graders? Ouch.
That's pretty bad. How does she read compared to fourth graders? That's pretty bad. How does she read compared to third graders? If she were in third grade in the school where I was supporting RTII, she'd be a good candidate for a tier 2 intervention--if she were in third grade. She's in sixth grade. You get a sense of how severe this discrepancy is? Makes sense? Ginny reads grade two material about as well as the beginning of the [inaudible]. Now, I'm not going to get super numerical. I'm going to say she's about three or four years behind where we would expect in terms of general reading skill. There's her expected level of performance. There's her present level of performance. Okay? And in our IEP team meeting, we need to first identify, one year from now, where in these levels
of performance would we like to see Ginny be successful. Would we like her to be successful in grade three one year from now? Would we like her to be successful in grade four one year from now? Would
we like her to be successful in grade five one year from now? Would we like her to be successful in grade six one year from now? Would we like her to be successful in grade seven one year from now?
Every goal we write should be written with its disadvantages and its advantages specified, so we can discuss it. Do you guys talk like this at your IEP meetings? So the advantage of this goal would be [makes noise], the disadvantage of this goal would be [makes noise]. Oh, no. It's just mindless [makes noise]. And that's not what the IEP was intended to be. The IEP was intended to drive the program that met the student's needs, not to fill out papers. So, I want this student to be successful in grade
three in one year. What's the advantage of that though?
AUDIENCE MEMBER: Doable.
MARK SHINN: Oh, doable. Who does that benefit? Me. Oh, wow. Probably doable, accomplishable, or what's simply called feasible. But that's not about the kid. That's about me. What's the disadvantage of that?
AUDIENCE MEMBER: [inaudible]
MARK SHINN: The gap is not reduced, and the student is probably not only not reducing the gap but probably as discouraged or more. Right? Now, compare that to reducing the gap. I believe it was the Carter case that suggested that one month's growth for one month of special ed intervention was considered trivial and unimportant, because that's the same expectation that we would have for a kid without special ed. So, let me throw out something else. How about if we have the student being successful in grade four? What's the advantage of that goal?
AUDIENCE MEMBER: [inaudible]
MARK SHINN: It closes the gap. What's the disadvantage of that goal?
AUDIENCE MEMBER: [inaudible]
MARK SHINN: Pardon?
AUDIENCE MEMBER: It makes the goal long term.
MARK SHINN: Okay. So, it reduces the gap, but the student will still likely need an intensive special education program one year from now. Right? Wouldn't you think? So the kid--okay, now they're three years behind instead of four. I mean, that is a fair amount of growth. That is nothing to trivialize, but the student might still need--and is there anything wrong with that? I should add: nothing wrong with a kid needing special education a year from now. Okay. Now, I'm going to throw out--let's say you want the student to be successful in grade five. What's the advantage of that goal? Closes the gap, less likely to need an intensive special ed program, but still probably likely. What's the disadvantage of that goal? Now, think.
AUDIENCE MEMBER: [inaudible]
MARK SHINN: But whose problem is that? That's about the adults and not the kid.
AUDIENCE MEMBER: It can be some of the [inaudible]
MARK SHINN: Maybe. But I'm going to argue with that again, it's an adult issue. What's the real potential disadvantage?
AUDIENCE MEMBER: It's going to require potentially level of intensity and time...
MARK SHINN: Excellent.
AUDIENCE MEMBER: ...that would take away from these options.
MARK SHINN: Oh, yeah. So, it could be.
AUDIENCE MEMBER: Well, that's how [inaudible]
MARK SHINN: That's exactly right. So the disadvantage is it's going to take a more intensive program and potentially more sacrifice--that's the concept there. And guess what the team is supposed to convene to do. See, that's why you want the parent there when we're talking about why the kid might have to sacrifice--because they should have a voice. We've turned this IEP process into something that's just going through the motions. It's just nonsense, because I don't know what the truth is. See, I don't know what program we can deliver, what sacrifices we can make.
That's why I need you there and you there because how could Mark know what the truth is with this kid. What is the balance that we need to have between intensity of program and all the other stuff?
That's why they convene the multidisciplinary team, including the parent. And we turn it into--search the research on IEPs and how people study them; you're not going to find very much, okay? So, of the few things that I found, one of them is by Smith, like 1991, that said something like: a third of the time the IEPs are never discussed as a team, a third of the time somebody just passes them around after the meeting for a signature, and only a third of the time are they even given token discussion at the meeting. Now, wouldn't you want to know what your team was supposed to accomplish? Now, excuse me if I get emotional a little bit, but Monday was a really bad day. My son has had 16 years of these things called--12 years of desmoid
fibroids--they're desmoid tumors--they're just nasty. You know, that is really nasty, and two years ago, after how many, 16, 17 surgeries, he had chemotherapy for a year and then another surgery and then radiation. So we thought we were, like, done. On Monday they discovered he's got a lemon-size tumor on this side. The next meeting we have will be like the IEP meeting, okay? The question will be: do we want to shrink this tumor, do we want to excise this tumor, okay? One treatment would mean finding something that they know will keep these things in control if they come back. The other one is, if they cut it out, it'll probably grow back in a different location. But the goal will be control or cure, you know, right, okay? But that's the
overarching thing that we're going to be discussing, and as part of our IEPs, those are sort of the same levels of discussion we're supposed to have. What's our goal here? If we do this, do we think we can eliminate the discrepancy? What's the goal here? We're thinking reduce the discrepancy. If we're going to eliminate the discrepancy or reduce it even more, it's going to require a more intensive program and, as Diane said, potentially other sacrifices to attain it. But that's not what we're doing, okay? And that's sort of our fault. There is no single truth, but there is a falsehood: this would not be
acceptable, having a kid make a year's growth in a year's time. Now, don't you all know some kids with severe disabilities--chronic, long-standing, multiple problems, kids who don't speak, don't do [makes noise]--for whom getting a year's growth in a year's time would be pretty amazing, right? We all know cases like that, but that's not the vast majority of kids that are receiving special ed. So we don't design the process for the exception, okay? Because I actually want us to challenge ourselves. There's clear data that kids with severe kinds of disabilities can learn to read at the second or third grade level, okay? There's clear data that with intensive instruction, many or most of those kids can get some functional level of reading. It's really intensive, and it takes a long time. So we don't want to automatically say, "Because [makes noise], we can't expect much." But by and large, we need to be challenged by our goals so we deliver programs of appropriate intensity. Now, I
don't know about you, but I was in a middle school two years ago, seventh grade, working with all the special ed teachers--there are three of them, big middle school. I asked, for a typical kid, what's the duration of their special ed intervention? Thirty-eight minutes. Thirty-eight minutes? Thirty-eight minutes? Okay, so that's the lowest amount? Oh, no, all the kids that have an IEP goal in reading from special ed have [indistinct] of 38 minutes. I said, "But not all of them?" They said, all of them. Kids with discrepancies like this, what do they get? Thirty-eight minutes. Kids with discrepancies like this, what do they get? You know what, I'm going to argue that if I were an outsider, I'd say that's sort of prima facie evidence that you don't have individualized education aligned with the student's unique needs, because everybody is getting the same. See, the interventions are supposed to be driven by what we want to accomplish, and we have
entirely lost sight of that, if we ever had it in the first place. This is not defensible for nearly all kids. This could be, this could be, this could be too, but it depends on the intensity of interventions and the sacrifice we're willing to make to get there. And I don't know the truth on that. That's a team decision. But that's what we're supposed to be hassling, haggling, arguing about, and we don't do it. Okay. So all of these things could be suitable, could be appropriate. But what we want to be able to do is turn those words into a graph. If we turn this into a graph and
this line reduces the gap, we're in good shape. My advice is this: I want to write my goal tied to local normative performance, not rate of improvement. Let me give you an illustration. Come back. This kid, okay? Wow, this would be a score at about the fifth percentile of fourth graders. If I wanted this kid to read at the twenty-fifth percentile of middle-of-the-year fourth graders, that's a value. That twenty-fifth percentile will be a score of, let's say, around a hundred--well, that would be a hundred words a minute correct. In one year, the student will read grade four material at a hundred words a minute correct. Would that reduce the gap? Okay. He's going to do better in more difficult material, and the percentile rank will change. Is the move from the fifth percentile to the twenty-fifth percentile a big change or a little change? It's huge. Percentiles are not equal-interval data. It takes a lot to move at the extremes. I'm going to set a criterion for acceptable performance based on a rational discussion of the percentile rank I'd like that kid to achieve to reduce the gap. Well, okay. So, let me see if I--I hate to do this haphazardly, because you never know
where you're going to get. Here's my illustration. Okay. Really simple: let's say this kid is at the second percentile, okay, in grade three, 18 words a minute correct. I want this kid to be reading at the fifteenth percentile. Ignore this, because this is from the tier 3 part, but the same basic principle applies. If I move the kid from the second percentile to the fifteenth percentile, is that a
lot of growth or a little growth?
AUDIENCE MEMBER: Well, that's a lot.
MARK SHINN: It's a lot. So I would write my goal: in one year, the student will be reading -- and I like to round -- 65 words a minute correct. If somebody says, "Where did you get that?" I'm going to say that in third grade material, that would take that kid from the second percentile to the fifteenth. Here's where I would look at those rates-of-improvement norms. Are you ready? More than half a word a week. Can I do that?
You know, without even distinguished instruction, I could do that. But I'm a special ed program, so I'm going to expect more than that. So here's what I could do. I could look at -- let me go back. I could look at this and say, "Hey, that rate of improvement -- I can do better than that." I could take those data and say, "Well, I was thinking about the fifteenth percentile, but I want to take that kid to reading 80 words a minute correct, because that would be, you know, at the twenty-fifth percentile, and that's going to reduce that gap even more, and that's about two words a week. I should really be able to do that." So I use the rates of improvement as the equivalent, in the old days, of the short-term objective, okay? And that is not what drives the goal, but it contributes to understanding the goal, okay? So I don't know if that's clear as mud or not, okay? So that's -- I just -- you know, having been in this business for a long time, I hear, "Well, you know, that's half a word a week, and the kid's in seventh grade; we'll never expect much." Wow. Did you know that with good lessons you can move kids faster in general reading achievement with a good program? If you keep kids motivated -- and the intervention programs, they're screaming good now, you know; it's not the old days. I mean, the intervention programs are really good, and so with older kids, you can accomplish a lot. What you will have difficulty with is reducing the knowledge gap, the vocabulary gap, and the stuff that comes from wide reading. How many of you work with middle school kids in special ed? You can get them to become pretty good readers -- much better readers -- but they're still going to struggle because of those other cells up there, right? They're going to struggle because of the language deficit, the knowledge deficit, and the motivational deficit, and you're going to fix all of that? Like, wow, that's huge. That's going to be really difficult, okay? But it doesn't mean we can't make them much better readers. And it also doesn't mean -- if we come back this afternoon -- that we can't learn how to
support those kids and kind of take care of instruction. Just as an aside, comprehension is a tier 1 problem. If you go see Daniel Willingham tomorrow, guess what he's going to tell you? Eight to ten lessons -- not eight to ten weeks, not eight to ten years. You see kids every year with comprehension goals, but teaching comprehension strategies is a few lessons' work, isn't it? Because guess where the comprehension is going to come from? It's going to come from language, it's going to come from knowledge, and it's going to come from use of strategies supported by content-area teachers. And if you ask content-area teachers what strategies they are supporting for understanding -- anybody got any? I've watched general ed classrooms. I'll tell you the expectations. It's called assign, abandon, and assess. That's content-area instruction. Understanding is supposed to be something that kids automatically know how to do. Okay. Goal setting -- good, we've got, like, no time, right? Crap. Excuse my language. Here I was thinking we started at 10:30 and I'd have 15 minutes for questions. Oh, good. Well, anybody have any questions? Don't pack up. We're done. Yes.
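The goal-setting arithmetic from the illustration above can be sketched as a quick calculation. The baseline (18 words a minute correct at the second percentile) and the goal values (65 and 80 words a minute correct) come from the example; the 36-week instructional year is an assumed figure, not something stated in the talk:

```python
# Sketch of the goal arithmetic from the illustration above.
# Baseline and goals come from the example; the 36-week school
# year is an assumption, not a value from the talk.

def needed_weekly_roi(baseline_wcpm: float, goal_wcpm: float, weeks: int = 36) -> float:
    """Rate of improvement (words correct per minute, per week) needed to reach the goal."""
    return (goal_wcpm - baseline_wcpm) / weeks

# Initial goal: 2nd percentile (18 wcpm) to 15th percentile (65 wcpm).
print(f"{needed_weekly_roi(18, 65):.2f} wcpm/week")  # prints 1.31
# Ambitious goal: 2nd percentile to 25th percentile (80 wcpm).
print(f"{needed_weekly_roi(18, 80):.2f} wcpm/week")  # prints 1.72
```

Under these assumptions, the more ambitious goal works out to a bit under two words correct per minute of growth per week, in line with the "two words" figure in the example.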
AUDIENCE MEMBER: So what does the maze do? [inaudible]
MARK SHINN: Maze is...
AUDIENCE MEMBER: Maze is not...
MARK SHINN: Yeah, maze is a silent reading test. And let's say I want -- see, I have this problem with just picking some group of kids to screen. I would much rather screen everyone. And so let's say I've got pretty good end-of-the-year sixth graders in my community, about average or above, and I want to screen everyone. As a first test, I could use maze. They'd have, you know, everybody do it in social studies, and everybody's done. And I would follow up the kids who perform poorly with oral reading. But to me, there's the advantage of -- as
a screener for older kids, that's pretty time efficient and I...
AUDIENCE MEMBER: Screening of what?
MARK SHINN: General reading.
AUDIENCE MEMBER: General reading.
MARK SHINN: General reading. Yeah. So, if I have -- and then, I don't mind it as a once-in-a-while progress monitoring tool, in addition to oral reading, for a middle school or high school kid. Just as another measure that's low cost. I might do it once a month, just to kind of see. The scores don't grow very much, because the measure is not very sensitive to within-student improvement, but they should grow if the kids are improving. It's low cost and doesn't take a lot of time. But oral reading will almost always be [inaudible], okay? Well, I'm sorry, because I really planned my time around this. I
thought I was doing just fine. If you come back this afternoon, the first part will be sort of a little bit the same, but the second part will get to a really foundational difference. The foundational difference is that once you get into high school, we have to adopt what's called a standards-based approach for our goals, and that's very different. You don't have kids' reading compared to tenth graders or eleventh graders or twelfth graders. That makes no sense whatsoever. We need to define the standard for minimum proficiency, so that would be the start of the [inaudible] idea. Anybody who wants to chat -- I'm just going to be hanging around, walking to the lounge, if you want to do individual follow-up. Thanks much, and I'm sorry I goofed up on my time.