EML2322L – Concept Selection Grading Tips
Purpose of this Document
This document outlines the methodology by which TAs should assess their groups’ DR2s in the hope of normalizing the grading procedure. Problems commonly seen on reports are explained, accompanied by suggested corrections and point deductions. An ungraded example DR2 is then attached, followed by its graded version. Prior to grading their first DR2, first-semester TAs should practice grading this example (without referencing the key) and then compare their markings with the graded example.
REMEMBER: When grading, be less concerned with grammatical and formatting errors than with content problems. However, if poor sentence structure and syntax errors are common and make the report difficult to understand, be sure to note that and provide helpful feedback. When making notes on the report, ensure your writing is clear and legible so students can easily understand the issues you are identifying. Do not make any mean or disparaging comments on the reports. Be careful not to give excessive praise either, because doing so makes it difficult to deduct points later for issues you may have missed.
NOTE: Due to the wide variance among DR2s submitted every semester, this document cannot address every issue that could arise when evaluating this assignment. Therefore, it is important to develop and use your intuition in situations where the correct grading procedure is vague. If you have a feeling something in the report is flawed, do not be afraid to mark it as incorrect. If an objective is difficult to understand, or you feel the objective does not help in the assessment of the robot component, mark it incorrect. If you still have doubts, ask an older TA or Mike. Similarly, please bring recommendations for additions or modifications to this document to Mike so it can be continually updated and remain as effective a resource as possible.
Good Objectives
Correct objective justification requires a clear definition of the
objective being evaluated, a clear explanation of why this objective is
important for the project’s success, an explanation of how the objective is
assessed, and a justified weighting factor for the objective. If using qualitative objectives, be sure to
consider how the objective(s) will be tested.
Common objectives that should appear in all matrices in EML2322L are:
manufacturing time, cost, and modularity (or size).
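To make the structure concrete, here is a minimal Python sketch of how a weighted decision matrix combines each concept’s objective scores with the weighting factors to produce a total. The objectives, weights, and raw scores below are entirely hypothetical and are only meant to illustrate the arithmetic, not to prescribe values.

# Hypothetical weighted decision matrix for a mobile platform subsystem.
# The objectives, weights, and raw scores are made up for illustration.
objectives = ["manufacturing time", "cost", "modularity"]
weights = {"manufacturing time": 0.40, "cost": 0.35, "modularity": 0.25}
concepts = {
    "Concept A": {"manufacturing time": 7, "cost": 9, "modularity": 6},
    "Concept B": {"manufacturing time": 8, "cost": 6, "modularity": 9},
}

for name, scores in concepts.items():
    total = sum(weights[obj] * scores[obj] for obj in objectives)
    print(f"{name}: weighted total = {total:.2f}")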
Bad Objectives
Unfortunately, decision matrix objectives are often poorly defined
or constructed. Common problems include:
testing criteria or methods that do not correlate well with the objective being
tested, objectives that do not differentiate between individual designs in a
meaningful way, and objectives that have little relevance to the actual
performance of the subsystem. If you have a feeling your group’s objective
is illogical in some sense, mark it incorrect or discuss it with another TA.
1. An
example of a poor objective for a ball or bucket manipulator is speed. While it makes perfect sense to assess speed
for a mobile platform subsystem, doing so for a manipulator rarely provides
meaningful insight. A much better
assessment would be the time required to manipulate the object, as this allows
direct and quick comparison of the concepts under consideration.
2. Another example of a poor objective for a bucket manipulator is bucket security. Many concepts use a fork made out of 80/20 that contacts the bucket on both sides and is lifted via a Globe motor. A bucket security objective is often employed to determine which designs are least likely to drop the bucket after grasping it. The metric used might be the amount of surface contact between the bucket and the gripper: each design is modeled in SolidWorks and then brought into an assembly with a bucket where the contact area is measured. The way the contact area is calculated is commendable; however, this attribute does not necessarily correlate strongly with the purpose of the objective. More surface contact between the manipulator and the bucket does not always equate to higher security, since several other variables are at play (e.g. where the contact is on the bucket). For these types of assessments, TAs should encourage groups to pursue objectives whose data comes from physical testing rather than calculation, because procedures in which prototypes of each design are built and then evaluated on their observable performance are generally preferable. In this example the TA should not dismiss the students’ previous work, but should encourage them to supplement their results with physical testing.
3. An example of a poor objective for a ball launching mechanism is shooting speed evaluated on a linear scale. Many shooting mechanism concepts consist of wheels attached to high speed motors that fire balls along a track toward the target, similar to baseball pitching machines. The intention of considering shooting speed is to ensure the balls will have enough energy to reach the target, which is an important objective to evaluate for a ball launcher. However, regardless of the selected motor/wheel combination, several concepts may be able to fire a ball more than fast enough to hit the target. But since the objective is assessed simply on the speed at which the ball is launched, the highest speed motor is awarded more points in that category, even though it offers NO added benefit over other motor/wheel combinations of similar design. Instead, groups should use a more results-based testing procedure (e.g. a test where they launch 5 balls with each design concept and count how many fall). This way, the objective differentiates designs for which launching energy is a genuine concern (like a gravity-fed mechanism) without differentiating between designs that have comparable performance.
Dependent Objectives (i.e. Objectives too Similar)
Another common issue is objectives that are too similar to each
other; that is, two objectives that evaluate the same characteristic or feature
of the design. For an effective decision
matrix, a design’s performance in any given objective should be independent (or
as independent as possible) of its performance in other categories.
Oftentimes the objective definition can merely be reworded to fix this issue. If the group can provide a compelling argument in their justifications as to how the testing procedure for each objective measures something different about the designs, then it is usually acceptable. Still, it is up to the grading TA to decide if the objective definitions and testing procedures are satisfactory in any given situation.
4. A common example of dependent objectives can be found in the way speed and controllability are evaluated for a mobile platform. Groups typically use their motor/wheel combination to evaluate speed, and some form of obstacle course to evaluate controllability. However, the obstacle course is often structured along the lines of “drive forward 5 meters, around a few buckets, and back, where the fastest time wins”. The performance of a design in this controllability assessment will be strongly correlated with (i.e. dependent on) its performance in the speed assessment because of the testing method. A much better way to evaluate controllability would be to design an experiment (e.g. an obstacle course) in which the mobile platform never has the opportunity to reach the higher speeds that would bias the controllability objective being assessed. This example does not mean speed and controllability are always inadmissible, but rather that groups should keep the definitions and evaluation procedures as independent as possible.
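If it is unclear whether two objectives are measuring the same thing, one quick and optional check is to see how strongly the concepts’ scores in the two objectives track each other. The sketch below uses hypothetical score lists and Python’s statistics.correlation (available in Python 3.10+); a coefficient near ±1 suggests the objectives overlap and one of them should probably be redefined.

# Hypothetical quick check for dependent objectives: if two objectives rank
# the concepts almost identically, they are probably measuring the same thing.
from statistics import correlation  # requires Python 3.10+

speed_scores           = [9, 7, 4, 6]   # one made-up score per concept
controllability_scores = [8, 7, 3, 6]

r = correlation(speed_scores, controllability_scores)
print(f"Pearson r = {r:.2f}")  # close to 1 here, suggesting the objectives overlap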
Erroneously Evaluated Objectives
Sometimes objectives are just erroneously evaluated, which is why
it’s important to decide on objectives as a group and to proof each other’s
work very thoroughly.
5. An
example of an erroneously evaluated objective is when groups define modularity
the same way as the mobile platform matrix example
(e.g. the number of fasteners removed to disassemble a subsystem for storage
each week in the project box), but then evaluate it as the total number of
fasteners in the subsystem.
Objective Weighting Problems
In many instances, objective weighting problems stem from how the evaluated designs perform on the objectives. For example, mobile platform designs commonly perform similarly on objectives such as manufacturing time. Therefore, the weighting assigned to this objective should not be very high, since the objective does not provide useful information for the selection of the design. It does not make sense for such an objective to be weighted heavily, since doing so makes it more difficult for the other objectives to determine the superior design. The weighting factors also need adequate justification in the objective definitions, including a clear explanation of why each objective is more or less important than the others.
Also, if objectives given for one subsystem are measurable for others, they should usually be included in those matrices as well. For example, if a group specifies weight as an objective for a ball hopper because they want to reduce the overall weight of their robot, then weight should be included for all other subsystems as well. If the goal of the objective is to reduce the overall weight of the design, why would it only be important for one subsystem? On the other hand, if the group defines weight as an objective because it benefits the individual performance of that subsystem somehow (not because they are trying to decrease the overall weight of the robot), then it does not need to be included in the other subsystems’ matrices, even though it is still a measurable quantity for those components.
Non-linear Scoring
The design report template encourages the use of linear score assessments whenever possible; however, objectives are sometimes submitted that use alternate scoring methods, such as binary or target value scoring. Although these methods are not encouraged, they are not always inadmissible. In some instances, it will be your job to assess whether the methodology is acceptable. Your main criterion should be whether the procedure the group employs provides logical and meaningful differentiation between the individual concepts.
If a target value is specified, an appropriate explanation of why that value was chosen is required, and scoring should follow a linear pattern in both directions away from that value. Groups tend to use binary objectives to weed out designs that cannot perform a task critical to the success of the subsystem. For example, groups may calculate drive wheel motor torque and use it to assess whether or not a design can traverse a ramp; those that can receive a score of 10, and those that cannot receive a zero. Although the reasoning is sound (i.e. the group is trying to ensure their robot will be able to enter the arena), encourage them to use their calculations to further differentiate among the designs beyond the confines of ‘yes’ and ‘no’. A suitable way of doing this would be to establish a minimum passing value (justification/calculation needed) below which all designs receive a zero, and then score linearly from that point. This way, the objective still serves to weed out weaker designs while further differentiating the others to account for random variability come competition day.
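The sketch below illustrates both ideas with hypothetical numbers: a minimum-passing-value objective that zeroes out failing designs and scores linearly above the threshold, and a target-value objective scored linearly in both directions away from the target. The function names, the 0-10 scale, and the torque values are assumptions for illustration, not course requirements.

# Hypothetical scoring helpers illustrating the two approaches above.
def threshold_then_linear(value, minimum, best):
    """Zero below the justified minimum; linear up to 10 at the 'best' value."""
    if value < minimum:
        return 0.0
    return min(10.0, 10.0 * (value - minimum) / (best - minimum))

def target_value_score(value, target, tolerance):
    """Full marks at the target, falling off linearly in both directions."""
    return max(0.0, 10.0 * (1 - abs(value - target) / tolerance))

# e.g. suppose 2.0 N*m of drive torque is the justified minimum to climb the ramp
print(threshold_then_linear(1.5, 2.0, 4.0))   # 0.0 -> fails the requirement
print(threshold_then_linear(3.0, 2.0, 4.0))   # 5.0 -> passes, partial credit
print(target_value_score(9.5, 10.0, 2.0))     # 7.5 -> 0.5 away from the target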
Other Things to Watch out for
So far the discussion has centered on typical problems found within the objective definitions and justifications. This is where most major issues in the design reports are found; however, there are other common mistakes you should watch for when evaluating the report. Quite often, values reported in the appendix, objectives, and matrix will not match each other. Verify that the weighting factors sum to 1 and that there are no math errors when calculating scores. Appendix A is often missing in-depth descriptions of testing procedures, evidence of testing, and data tables showing how each design performed; all of these are needed for each objective that makes use of physical testing.
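Two of these checks, the weighting factors summing to 1 and the arithmetic behind each total, are easy to verify mechanically. The Python sketch below uses made-up weights, scores, and reported totals, with one total deliberately wrong to show how a mismatch would surface.

# Hypothetical sanity checks on a finished decision matrix. The weights,
# scores, and reported totals below are made up; the Concept B total is
# deliberately wrong to show how a mismatch would surface.
weights = [0.40, 0.35, 0.25]
scores = {"Concept A": [7, 9, 6], "Concept B": [8, 6, 9]}
reported_totals = {"Concept A": 7.45, "Concept B": 7.85}

assert abs(sum(weights) - 1.0) < 1e-6, "weighting factors must sum to 1"

for name, row in scores.items():
    recomputed = sum(w * s for w, s in zip(weights, row))
    if abs(recomputed - reported_totals[name]) > 0.01:
        print(f"{name}: report says {reported_totals[name]}, math gives {recomputed:.2f}")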
In addition, students often have trouble applying significant figures to the values reported in the evaluation matrices. Each objective should have an appropriate number of significant figures depending on its implied accuracy or resolution, and all objectives will NEVER have the same number of significant figures. Measured quantities like maneuvering or manipulation times should never be reported more precisely than the nearest second (i.e. 31 sec, never 30.6 sec). Integer parameters like modularity or capacity (i.e. number of balls) should never be reported with a decimal, as that makes no sense (i.e. 4, never 4.0). And material cost should always be reported to two decimal places because that is the standard for currency (i.e. $12.50, never $12.5).
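A short illustration of these reporting conventions, using made-up values:

# Hypothetical values showing the reporting conventions above.
maneuvering_time = 30.6   # seconds, from a stopwatch test
modularity       = 4.0    # number of fasteners removed
material_cost    = 12.5   # dollars

print(f"maneuvering time: {round(maneuvering_time)} sec")   # 31 sec, not 30.6 sec
print(f"modularity: {int(modularity)} fasteners")            # 4, not 4.0
print(f"material cost: ${material_cost:.2f}")                # $12.50, not $12.5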
Modification to Existing Design(s)
As noted in
the updated DRT, if a student makes a design choice in DR1 that can’t work
(e.g. selecting a Buehler motor as a drive wheel motor), they DO NOT have the
option of giving their concept zeros in the matrix. Instead, the student MUST present a modified
design that can be scored against the other concepts. The updated concept replaces their DR1
concept.
Addition of New Designs
Groups often come up with new designs or modifications to existing designs between DR1 and the submission of DR2. If this occurs, a sketch of the new design should be included following the objective definitions for that subsystem, along with a short explanation of why the new design was added. However, if the new design has the same structure as another and consists mainly of dimension changes, groups can simply photocopy the initial drawing and manually change the necessary dimensions instead of creating a whole new drawing.