The Texas A&M University System embraces a common view of important student outcomes and is accountable for their sustained measurement.
For all TAMU System universities, the rationale for assessing student learning outcomes (SLOs) originates primarily from efforts to maintain institutional effectiveness, defined as a process of identifying outcomes, assessing the extent to which they are achieved, and providing evidence of improvement based on analysis of those results.
Upon completion of their degree program, students will be able to synthesize knowledge from general and specialized studies.
All criteria met and results exceed expectations with little room for improvement.
Most criteria met and results indicate mastery of objective with some room for improvement.
Acceptable number of criteria met and results meet expectations with room for improvement.
Some criteria met and results indicate need for improvement.
Few criteria met; results indicate need for significant improvement or no/insufficient results reported to measure performance of objective.
National Survey of Student Engagement: First-Year Experiences and Senior Transitions Module Item 6. "How much confidence do you have in your ability to complete tasks requiring the following skills and abilities?"
6a. Critical thinking and analysis of arguments and information (weighted mean: 3.4);
6b. Creative thinking and problem solving (weighted mean: 3.5).
Responses from participating seniors showed that their confidence in these areas was equal to the weighted mean for the comparison group, which is the existing target for NSSE items.
National Survey of Student Engagement: Item 2. "During the current school year, about how often have you done the following?"
2a. Combined ideas from different courses when completing assignments (weighted mean: 2.9);
2b. Connected your learning to societal problems or issues (weighted mean: 2.7);
2g. Connected ideas from your courses to your prior experiences and knowledge (weighted mean: 3.2).
Responses from participating seniors showed that their engagement in these three areas was at the weighted mean for the Southwest Public institution comparison group, which is the existing target for NSSE items.
One of the actions taken to promote greater participation in NSSE during the spring 2017 administration was direct messaging to targeted students through the Learning Management System, along with a series of incentives for participation. As a result, WTAMU experienced a 10 percentage point increase in first-year student participation (from 13 percent in 2016 to 23 percent in 2017) and a 12 percentage point gain in senior student participation (from 17 percent in 2016 to 29 percent in 2017). These efforts bring WTAMU near the existing participation target of 30 percent for both first-year and senior students.
ETS Proficiency Profile Context-based Sub-domains:
- Humanities
- Social Sciences
- Natural Sciences
Humanities: Mean score of 118.23 (range 100-130) is slightly above the 50th Percentile (118) for all test-takers;
Social Sciences: Mean score of 113.14 (range 100-130) is just above the 50th Percentile (112) for all test-takers;
Natural Sciences: Mean score of 116.14 (range 100-130) is just below the 50th Percentile (117) for all test-takers.
The mean Total Score was 447.03 (range 400-500), which is above the 50th Percentile (441) for all test-takers.
Two systemic actions under development will benefit not only assessment of the Texas A&M University System Student Learning Outcomes but also the Texas Core Curriculum objectives:
1. Creating a stand-alone "general education assessment committee" tasked with comprehensive, end-to-end oversight of assessing the Texas Core objectives and the TAMUS SLOs.
2. Developing a robust analytics website with interactive visualizations and dashboards to publicize university-wide assessment results and provide access to data sources for use at the program level. For example, WTAMU has an extensive history of results from the NSSE, and those items are being mapped to institutional function areas so that programs can incorporate additional indirect assessment data into continuous improvement efforts.
The University uses both the CAAP and EPP on a biennial basis to assess the broad foundational knowledge of our students across the core curriculum. Specifically, skill levels in reading comprehension (RDNG), writing usage/mechanics and rhetorical skills (WRTG), mathematical reasoning (MATH), and critical thinking or analytical reasoning (CRIT) are assessed and compared to national norms. In addition, we collect student products in upper-division disciplinary courses and score them using common VALUE rubrics from the AAC&U.
Sufficient. Testing 676 students in 19 different classes in the core curriculum yielded the following results: CRIT national average 59.8, WTAMU 60.2 (+0.4); MATH national average 58.6, WTAMU 58.7 (+0.1); RDNG national average 60.6, WTAMU 59.2 (-1.4); WRTG national average 63.1, WTAMU 61.3 (-1.8). Differences of 0-5 are considered negligible and not statistically significant. Growth from the freshman to the senior population was as follows: RDNG +2.2, WRTG +1.3, CRIT +2.2, MATH -0.5.
During the 2013-2014 academic year CAAP administration, a representative University sample showed our students scoring above the national norms in mathematical reasoning and in critical thinking or analytical reasoning, and below the norms in writing usage/mechanics and rhetorical skills and in reading comprehension. There was growth from the freshman level to the senior level in three of the four skill areas tested, the exception being mathematical reasoning. Through data analysis with faculty, it was theorized that less-skilled students deferred the required mathematics courses until later in their matriculation, possibly due to math anxiety. Regardless, the negative difference between the two groups on the test was negligible.
The first implementation of this methodology was conducted in Spring 2014. Hence, it was recommended that no abrupt changes be made on the basis of these data until the next CAAP administration. That administration was conducted November 3-14, 2014; those results will be compared with the previous administration, and necessary actions will be taken for learning improvement.
The University will use the EPP to provide measures similar to the CAAP for our online student population. The first sample will be taken in Spring 2015, and results will be reported in the next cycle.