After enduring my first year of teaching in 2011, I was excited to take on year two. No longer just trying to survive, I approached the year with determination and a concrete plan. One component of that plan was to take band students to a large group contest for the first time in over a decade. After agreeing to host a contest and preparing my groups, I was eager to receive outside feedback. We performed a Karl King march and Petite Symphony by Brian Balmages, and I felt the ensemble represented itself well. There were some missed entrances and poor intonation here and there, but the overall performance went as anticipated. Next came the clinic, and once again the students responded well. As the contest organizer, I didn’t see our results until later: 37, 32, and 34. In Minnesota those scores amount to a I, a II, and a II that fell one point short of a I, meaning we missed an overall rating of I by a single point out of 40. I was shocked. We had performed well and progressed greatly, but I could never in good conscience have accepted a I for that performance.
Some directors might be disheartened to receive a II at contest, and some of my students were, too. As we debriefed from the experience, we listened to judge comments and reflected on the entire process. One student raised a hand and expressed what many were thinking: “Why didn’t we get a I?” I told them that of course a I was our target, but asked whether they had learned anything from watching so many bands the day before and from hearing the judges’ and clinician’s comments. Most timidly nodded, and then I asked if they knew what the II meant. In Minnesota, a II represents an Excellent rating and a I means Superior. I asked them, based on the comments we received and the other bands they heard, whether they thought we deserved a Superior. Some begrudgingly shook their heads, and then one said, “Maybe not, but I think we did good!” I replied, “You did better than that, you were excellent.”
If my group had received a I at contest, our efforts would have been undercut. We had improved over the previous two years but were far from our destination. We took pride in our performances, and the program’s culture improved steadily. We felt we were beginning to achieve great things and could create something beautiful if we kept going. While I emphasized that the focus of this contest was on the comments from clinicians rather than a single rating, many would have taken a I rating as a clear indication that the band had reached its destination. That feeling would have diminished our attempts to push the ensemble further. This experience, many others since, and conversations with wonderful colleagues have led me to one question: What can be done to enhance the large group contest experience and maximize the educational and musical benefits associated with it?
Contest reform is a complicated and hotly debated topic. While there is no consensus on how to fix the process, many agree that reforms are needed. The focus on scores has eclipsed many of the educational merits of contests. How many directors prioritize the rating above the comments and critiques received at contest? How many bands perform Grade 5 literature and then implode while sight-reading a Grade 2 or 3 because they have been rehearsing the Grade 5 music for months? The chance to learn from expert adjudicators, observe other bands in performance, and perform for them in turn has devolved into an event better suited to an athletic arena than the concert hall.
I do not have a silver bullet to fix contests but hope to promote a discussion that will someday elevate contests to the artistic and educational experience that our students deserve. Any conversation about reform should begin with a discussion of the desired outcomes of large group music contests.
Expert Evaluation and Feedback
Arguably the most significant goal of contests is to receive expert evaluation and feedback. A successful clinician will identify and emphasize concepts introduced by the instructor throughout the year and perhaps present them to the ensemble in a novel way. Furthermore, the clinician may identify areas for improvement that have eluded the director or that were set aside to devote time to other priorities. The comments and feedback are equally beneficial to the students and the director. By reviewing adjudicator comments over several years, a director can identify priorities, strengths, and areas for growth. In some ways, the evaluations comment on the musicianship of the director as much as the students.
Choosing Appropriate Literature
There is no shortage of articles, dissertations, and books on music selection. Still, nestled safely within the comforts of our home auditoriums, performing for friendly audiences, some directors eschew this responsibility. Absent a fine arts supervisor or a veteran colleague from a nearby school, a director is often the only person qualified to assess repertoire. Contests add an element of peer review that encourages directors to select more educationally sound literature at the appropriate level of difficulty. While a band might muddle through a subpar performance of a Grade 5 work at home, ensembles face a higher standard at contests.
Watching Other Bands
Most students do not hear other bands very often. There are plenty of excellent recordings, but seeing a performance live is special. At contests, students can hear several area groups perform. It is one thing to hear the Dallas Winds perform Blue Shades and quite another for students to see peers performing it. The experience can open students’ eyes to what is possible in their own ensemble.
Performance Opportunity
One frequently overlooked benefit of contests is having another chance to perform. As artists, our students work all school year to create beautiful sounds in harmony with classmates. For most schools, that equates to roughly 180 instructional class periods, plus hours at home polishing their craft. With so much time and energy devoted to making music, no opportunity to share music with others should be wasted.
Maximizing the Educational Value of Contests
With the goals listed above, what can be done to maximize the educational benefits of large group contests? What follows is an attempt to identify the challenges present within many contests across the United States, along with best practices observed from outstanding colleagues and other suggestions to begin the discussion.
Rating Inflation
Not every band should receive a I. A II at contest is not only acceptable but is considered excellent in many states. If our profession desires credibility in the educational community, it must acknowledge that rampant rating inflation is not in the educational best interests of our students. For any assessment to have validity, it must be honest and authentic. Without such assessment, a primary contest goal is undercut straight out of the gate. Several factors have contributed to rating inflation. The first is an excessive focus on the number, often driven by external pressure from administrators and parents. These factors build pressure on adjudicators to confer more top ratings. Consciously or unconsciously, adjudicators have learned that giving Is gets them hired back, while giving too many IIs earns them a reputation as a hard judge who doesn’t understand the local realities.
One remedy for rating inflation is a random audit of the Is awarded at each contest. Provided that each ensemble’s performance is recorded, each site coordinator randomly selects one of the performances that received a I rating. That recording goes to a statewide adjudication committee that rates the ensemble blindly. If the committee agrees with the rating, nothing further needs to happen. If, however, the committee determines that the performance did not merit a I, a letter is sent to the site coordinator and the judges encouraging them to recalibrate their expectations for each rating level.
Process over Outcome, Comments over Scores
As directors we can frame how students view the contest experience. At schools that consistently achieve excellent performances and high ratings, you rarely hear discussion of scores and ratings. These programs have embraced the philosophy that if you implement a sound pedagogical approach, the scores take care of themselves. Most are not preoccupied with scores; they attend to receive expert comments that can help students progress as musicians. Creating an atmosphere that stresses process over product takes time. Students, families, and administrators often lack the musical understanding to perceive incremental growth from one performance to the next, which is one reason they cling so tightly to numeric scores. They may not understand balance, blend, and intonation, but they know what a I means.
Although some directors make the mistake of skipping this step, it is essential to share the adjudicator comments with students. Some contests are set up so that the ensemble receives twenty minutes to perform followed by a ten-minute clinic from an adjudicator. This approach allows students to hear comments directly from an expert and even implement the suggestions immediately. How many times has a clinician worked with your band, made a comment that mirrors what you have said a hundred times during the year, and inspired the band to a sudden breakthrough? Time with the adjudicator should be exactly that – a chance to emphasize concepts that have been covered and to identify a few new ideas to focus on for the rest of the year.
Time constraints do not always allow for this kind of clinic, but playing back the judges’ recorded comments can accomplish many of the same goals. If we do not share the comments from contest with our students, we eliminate one of the primary purposes of the event. Furthermore, directors may find it helpful to create an executive summary of sorts for each ensemble, listing the key points mentioned by adjudicators. Keep the summaries from year to year and then compare them. Are certain comments mentioned consistently? These can be great indicators of areas for improvement. A fresh set of ears might identify parts of our ensemble that we have become deaf to over time. Similarly, it can be helpful to recognize recurring compliments that show the strengths of your ensembles.
Even after shifting the focus from product to process, administrators and families likely will not have the ability to hear the musical improvements. If we seek authentic evaluations at contest, ratings will go down. If 80% of the bands at a contest receive I ratings, we are not being honest. If more honest assessments do reduce scores, how can we explain to constituents that a II is still an excellent rating? Some other fine arts disciplines have removed the number system altogether. Many contests use a rating system similar to the one below, or its inversion.
I – Superior
II – Excellent
III – Good
IV – Fair
V – Poor
In most contests, each numerical rating is accompanied by a descriptor. While it may be difficult to convince an administrator that a II is a good rating, a performance described as excellent is something they can understand and be proud of. By eliminating the numerical rating and embracing the descriptors, we can ease the transition as programs adjust to more accurate ratings.
Adjudicator Training
Rating inflation is closely tied to adjudicator training. As with music education, merely being a good musician is not enough. Excelling as a musician and excelling as an adjudicator, while complementary, are separate skills that must be learned, practiced, and maintained. Many state organizations provide helpful materials and guidelines for adjudicators. Some key points highlighted in the handbook created by the Missouri State High School Activities Association and directed toward adjudicators include:
• Music festivals are for the students. Address remarks to the performers.
• Music festivals must be an educational experience. Performers should benefit from your musical expertise. Your comments should help them build, refine, and grow as musicians.
• The rating must be secondary. If you were to remove the check marks and ratings from the form, what is left for the performers? You have been hired for your educational expertise and largely for the comments you provide.
• Be warm, be friendly, and be fair and understanding. All participants will be nervous in the performance center. Attempt to alleviate this stress.
• Use the Three C’s Approach – Compliment, Criticize, and provide a Course for Improvement.
• Assume every performer is presenting their best effort, and return that effort in kind.1
The National Federation of State High School Associations recently created an interactive online course “designed to provide an overview of music adjudication and the best practices for many adjudicating situations.”2 The course, titled Introduction to Music Adjudication, is free to all music educators following a short registration.
One resource that would improve adjudicator training is a set of reference recordings of a prominent work performed at each rating level. The recordings would help judges establish a consistent sense of what a performance at each rating level sounds like. For example, a judge could be given five recordings of Gustav Holst’s Suite in Eb: one at the level of a I: Superior, another at a II: Excellent, another at a III: Good, and so on. The ratings for these recordings could be established by a representative committee to build consensus on what each rating represents. Such recordings could unify ratings across the country, providing consistency from state to state.
Encourage Students to Watch Other Bands
While the ensembles on stage at a music contest might not be professional, students can learn plenty by observing their performances. Some directors provide students with rubrics or rating sheets similar to the ones used by the judges to promote active listening. This approach encourages students to listen critically to the ensemble on stage and stresses the concepts they will be evaluated on later in the day. These sheets can lead to fruitful classroom discussions about the strengths and weaknesses of other groups and how they relate to those of their own ensemble.
For many schools, the largest obstacles to the strategies above are scheduling and busing. For some districts it might not be possible to keep students out of other classes any longer than necessary. However, armed with information about the value of watching performances, your administration may be more receptive than you expect to letting students stay and observe. As the saying goes, the worst they can say is no.
Contest Ratings as Teacher Evaluation
There is a growing trend in music education in the United States toward incorporating contest ratings into teacher evaluation systems. In the absence of standardized testing for music ensembles, many administrators use contest ratings as a third-party evaluation of teacher effectiveness. Such decisions exacerbate the already substantial emphasis on contest scores. Furthermore, rater reliability and consistency in most regions are not high enough to give this method of evaluation credibility. A 1993 study in the International Journal of Music Education found that ratings were significantly influenced by adjudicator training and familiarity with the repertoire being performed.3 Other studies found ratings to be influenced by the length of the contest day,4 performance order,5 labels such as concert band or wind ensemble,6 and even conductor expressivity.7
By incorporating effective adjudicator training and the other recommendations listed above, our profession can improve the accuracy of large group contest ratings, but a rating is still only a snapshot of a single moment. If one insists on weighing contest ratings as a measure of teacher effectiveness, one strategy is to assess long-term trends by considering at least two years of data, and ideally as many as five.
Conclusion
The potential educational benefits of large group music contests are vast. Contests provide an opportunity for our students to be heard and to receive comments from expert music educators. They remind directors of the immense importance of repertoire selection and, through recurring comments from year to year, provide expert feedback for directors themselves. Our students hear performances by schools from the surrounding area, which develops their critical listening skills and shows them what is possible for their own ensemble. As contests around the country currently stand, obstacles of varying origin and complexity impede these outcomes, but there are ways to improve the educational value of large group contests for your students, transforming the experience into the educationally and musically fulfilling event that they deserve.
***
Notes
1 Adjudicator Instructional Booklet for the Missouri State High School Activities Association (www.mshaa.org).
2 Introduction to Music Adjudication by the National Federation of State High School Associations (www.nfhslearn.com).
3 “Music performance assessment: A study of the effects of training and experience on the criteria used by music examiners” by N. Winter (International Journal of Music Education, No. 22, 34-39).
4 “Practices and procedures in state adjudicated orchestra festivals” by G.V. Barnes and R. McCashin (Update: Applications of Research in Music Education, Vol. 23, No. 2, 34-41).
5 “The Queen Elisabeth musical competition: How fair is the final ranking?” by R.G. Flores and V.A. Ginsburgh (Statistician, Vol. 45, No. 1, 97-104).
6 “The effects of band labels on evaluators’ judgments of musical performance” by B.A. Silvey (Update: Applications of Research in Music Education, Vol. 28, No. 1, 47-52).
7 “The effect of conductor expressivity on ensemble performance evaluation” by S.J. Morrison, H.E. Price, C.G. Geiger, and R.A. Cornacchio (Journal of Research in Music Education, Vol. 57, 37-49).