Character Skills Snapshot Results

@Happytimes2001 I took the test around November and received my scores December 15

There’s a chart on the SSAT website that says when you get your results based on when you took the test.

I got emerging for everything. I think I screwed up the test; when I took it, I could never make a solid decision about which statements applied to me better, so a bunch of my answers contradicted each other. I’m definitely not sending mine… and not to sound arrogant or anything, but I know myself well enough to be really positive that it’s not accurate about me at all. I’m not delusional, I know my strengths and weaknesses, and this thing is totally wrong.

In my opinion, it’s just an overvalued Buzzfeed quiz.

I wonder the same thing about whether kids are penalized, and I share similar concerns.

The Snapshot is like another test or assessment, but we are not allowed to know what the real score or scale is… Another discriminator/eliminator without context.

Results were not at all consistent with my daughter’s personality. Her strengths and weaknesses were flipped. Seems some places are really encouraging submitting this, especially this year since they piloted it last year. Any thoughts on whether it can really hurt if we don’t?

And to add…the results are not really consistent with what we have written in our parent statements about her! I know she was very frustrated taking the Snapshot because of all the forced selection, when in many cases none of the options were good, and in other cases more than one option was the best. As someone said…quack stuff

I don’t have an opinion on the merit of the Snapshot, but I did want to note that these sorts of behavior-based questions, where there are no clean answers and you have to choose among equally attractive or unattractive options, are typical. If the tests are well written, the profiles created are quite accurate. Again, I have no opinion or knowledge about whether the Snapshot is well or poorly written.

Took a while but the results came back. Though what is actually the strongest area came out weakest on the test, s/he decided to send the results since all but one were in the highest category. I think even if two had been developing or lower I would have suggested holding off. It doesn’t appear that many have taken the test: it stated only 3,700, so that would mean maybe only those who are applying to top schools or those that require it. I think it may need a lot of work, especially as you are testing middle schoolers who can’t really read between the lines.

I’ve taken two SSAT tests this year, and was looking to do the snapshot to see how I’d do, and whether or not I’d send it. When looking for it on the website, I was not able to find it. How do you find it?

DS scored:
Above the 75th percentile in social awareness (Demonstrated)
Between the 25th and 75th percentile in everything else (Developing)
Below the 25th percentile in initiative (Emerging)
Have no idea how good or bad these scores are in terms of what they are looking for. Not sure if we should submit or skip…any advice?
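For anyone puzzling over how the bands relate to the labels, here is a minimal sketch of the mapping as described in the post above. The cutoffs (25th/75th percentile) are taken from that post; the actual SSAT boundaries are not published, so treat this as illustrative only:

```python
# Hypothetical mapping of percentile bands to Snapshot labels, using the
# cutoffs described in this thread; the real SSAT boundaries may differ.
def snapshot_label(percentile: float) -> str:
    if percentile > 75:
        return "Demonstrated"  # above the 75th percentile
    if percentile >= 25:
        return "Developing"    # between the 25th and 75th percentile
    return "Emerging"          # below the 25th percentile

print(snapshot_label(80), snapshot_label(50), snapshot_label(10))
```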

For those of you applying via the SAO portal - the parent section has a survey portion where you respond (click on circles) with “Demonstrating”, “Developing”, etc… for various social-emotional characteristics. Do you think they (SAO/SSAT) are using that to determine consistency in reporting? Perhaps using that to further validate this instrument? Just a thought…

Hardcastle: If you are signed into your SSAT page (as a student), you can see a tab on the left to take the test. If you cannot find it, call them and have them walk you through it.

Golfr8: I think they are trying to find consistency in many respects (recommendations, parental questions, interview questions). They are looking to see if the student is consistent in various areas. In one interview we sat next to a couple and their daughter; they were interviewing next to us in the waiting area. The man spoke the entire time about the daughter in the third person. He was really not humble at all, and the things he said made me cringe (materialistic, do-you-know-so-and-so, etc.). So maybe they give the parental interview to weed out (or commiserate with) the student. You can tell a lot about a person in an interview.
They are also looking to see if there are any social/emotional issues which could create problems for the school down the road. It must be so hard trying to figure this out in 8th graders. Many don’t talk that much and the ones who are extroverted might say something that could be taken wrong. It’s a lot of pressure on them. I am glad we went through the process, however, since it will be a good thing to keep in mind when applying for college in four years.

I wouldn’t hide this report from any school. I also told my daughter before the test, “be honest”. She got 1 emerging, 4 developing, and 3 demonstrating. We sent the report to all schools assuming it won’t affect the admission decision (otherwise it would be unfair, since not all applicants submit it); it’s just a tool for the school to know you better. I asked the Hotchkiss interviewer whether the result would affect their decision in any way; they answered “no”.

Bottom line is: if any school rejects my kid because of the result, that means my kid doesn’t fit into that school. Better to know it now than later; I don’t want my kid to have a miserable high school experience.

Seriouskid did not complete it. It was not mandatory at any of the schools applied to…so I did not see the necessity of taking the test. We did take the beta test to see what it was about once we received several emails from SSAT, and I found the questions to be misleading.

Having reviewed what little information is available about this Skills Snapshot, I have very serious methodological concerns about it:

  1. Forced-choice conjoint analysis, which is a significant portion of this battery, suffers from a variety of known issues…including the mean-variance confound, which occurs either when a person HIGHLY values one measured trait OR when their preferences are VERY consistent…Ordinal responses simply can’t differentiate in the former case, and in the latter case, respondents may seem to undervalue certain traits simply because they are responding very consistently (e.g., because they clearly value other traits more highly and are being quite deliberate in their marks).

  2. The idea of taking individual personality traits and scaling them against a norm is one thing…it gives you a sense of where a given respondent falls in the spectrum of those who also took the battery in terms of their relative prioritizations of the choices…BUT…this battery goes a step further and tries to put normative labels on the skills, suggesting that they are either emerging, developing, or are already being demonstrated…based on seemingly little more than how highly a respondent values a trait relative to peers.

This can lead to some very odd results in corner cases. For example, a child who takes their time on the battery, trying particularly hard to be thoughtful and consistent, may actually HARM their score…because too much consistency in responses may lead to some skills being very highly rated and others not…Controlling for this would be very difficult, and I’m not seeing anything in the materials available that suggests this has truly been evaluated as a risk.

Similarly, a child who is “off the charts” on one dimension and knows it may see artificial decreases in other areas simply because their choices (including the tough ones designed to elicit “prices” in the forced-choice conjoint) are always still “clear” in the end…in essence, demonstrating too much consistency.

Yet another example: assume a child who is exceptionally mature socially and in character…that child will find it very difficult to excel on every dimension…because…well, math…Most students will find many of the questions “hard” because the forced choice is designed that way…to force tough choices that create “prices” for those variables…but most kids will have some answers that vary a bit from choice to choice, and it will show in “inconsistent” answers…this is fine/normal…But when one kid who is head and shoulders above another on one dimension is more consistent in their responses for their “4th most important dimension,” that dimension will almost certainly end up falling below the 75th percentile of responses from others whose answers were more inconsistent…because…math…
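The consistency effect described above is easy to demonstrate with a toy model. The sketch below is NOT the actual Snapshot scoring (which is not public); it is a generic pairwise forced-choice simulation under a logistic choice rule. Two hypothetical respondents share identical underlying trait values and differ only in how consistently they respond (the `beta` parameter). The very consistent respondent’s lowest-priority trait collapses in the win counts, exactly the confound described:

```python
import itertools
import math
import random

# Hypothetical trait values shared by both respondents (names illustrative).
TRAITS = {"initiative": 0.9, "teamwork": 0.7,
          "self_control": 0.5, "open_mindedness": 0.3}

def win_counts(values, beta, trials=200, seed=0):
    """Count how often each trait 'wins' repeated pairwise forced choices.

    Logistic choice rule: P(pick a over b) = sigmoid(beta * (v_a - v_b)).
    Larger beta means a more consistent (less noisy) respondent.
    """
    rng = random.Random(seed)
    wins = dict.fromkeys(values, 0)
    for _ in range(trials):
        for a, b in itertools.combinations(values, 2):
            p_a = 1 / (1 + math.exp(-beta * (values[a] - values[b])))
            wins[a if rng.random() < p_a else b] += 1
    return wins

noisy = win_counts(TRAITS, beta=2)        # a less consistent respondent
consistent = win_counts(TRAITS, beta=20)  # a very consistent respondent

# Same underlying values, yet the consistent respondent's lowest-valued
# trait loses nearly every item, so its ordinal score collapses.
print(noisy["open_mindedness"], consistent["open_mindedness"])
```

Under this toy model the consistent respondent’s fourth trait wins only a handful of the 600 pairings it appears in, while the noisier respondent’s same trait wins a substantial fraction, which is the “too much consistency harms the score” scenario in miniature.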

The notion that ordinal valuations can be both placed on a normative scale AND that they can be used on a basis relative to other test takers…No…just no…something has to give in the methodology.
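For contrast, norm-referenced percentile scaling on its own is uncontroversial and easy to sketch; the trouble described above comes from feeding it ordinal forced-choice inputs and then attaching normative labels. A minimal illustration (the Snapshot’s actual norming procedure is not public, so the scores and norm group here are made up):

```python
from bisect import bisect_left

def percentile_rank(score: float, norm_group: list) -> float:
    """Percent of norm-group scores strictly below `score`."""
    ranked = sorted(norm_group)
    return 100 * bisect_left(ranked, score) / len(ranked)

# Made-up raw scores for a hypothetical norm group of ten test takers.
norm = [3, 5, 6, 7, 8, 9, 10, 12, 14, 15]
print(percentile_rank(13, norm))  # 13 exceeds 8 of the 10 norm scores
```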

I’d love to see the test designer’s distributions of results…share of respondents by variable and rating. Some of the correlation coefficient data is out there, but I’d love to see actual ratings by dimension, and see how many kids are scoring 1,2,3,4…dimensions in Demonstrating etc…I have WAY more questions than answers at this stage about how this ranking is determined, how they know it’s relevant, and how they dealt with some very nuanced, but important issues, in the test design and validation.

Just got my daughter’s score back for this and I can’t for the life of me understand why any school would use it as a measure of anything. Her scores were extremely positive (all “demonstrating” except one “developing”), and I think largely accurate…but…is it really possible for kids (or even adults, for that matter) to differentiate between what they know they SHOULD do and what they’d actually do? It’s very strange. It’s like the kids who are most honest will be penalized. It’s bizarre. At the least they should administer it simultaneously with the SSAT in an influence-free environment.