Multi-Dimensional Operations and Analysis (MDOA Solutions)

Enterprise Reciprocity Diagnostic

A scientifically structured 28-item scan across seven service domains and four reciprocity dimensions. Complete the diagnostic for an instant domain heatmap, the Enterprise Reciprocity Quotient (ERQ), and a concise interpretive brief that guides next steps.
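The scoring method behind the heatmap and the ERQ is not published on this page. A minimal sketch of how such a score could be assembled, assuming the 28 items are ordered domain by domain (four per domain), that a domain score is the mean of its four items, and that the ERQ is the overall mean rescaled to 0-100; the function name and rescaling are illustrative assumptions, not the vendor's formula:

```python
from statistics import mean

DOMAINS = 7           # seven service domains
ITEMS_PER_DOMAIN = 4  # four reciprocity dimensions per domain
SCALE_MIN, SCALE_MAX = 1, 7  # 7-point agreement slider

def score_diagnostic(responses: list[int]) -> dict:
    """Turn 28 slider responses into per-domain scores and an ERQ.

    Assumptions (not from the source): items arrive in domain order,
    a domain score is the mean of its four items, and the ERQ is the
    grand mean rescaled to 0-100.
    """
    if len(responses) != DOMAINS * ITEMS_PER_DOMAIN:
        raise ValueError("expected 28 responses")
    if not all(SCALE_MIN <= r <= SCALE_MAX for r in responses):
        raise ValueError("responses must sit on the 1-7 scale")

    # Per-domain means feed the domain heatmap.
    domain_scores = [
        mean(responses[i:i + ITEMS_PER_DOMAIN])
        for i in range(0, len(responses), ITEMS_PER_DOMAIN)
    ]
    # Rescale the grand mean to 0-100 for the headline quotient.
    erq = (mean(domain_scores) - SCALE_MIN) / (SCALE_MAX - SCALE_MIN) * 100
    return {"domain_scores": domain_scores, "erq": round(erq, 1)}
```

Under these assumptions, an all-midpoint submission (`score_diagnostic([4] * 28)`) yields domain scores of 4.0 and an ERQ of 50.0, a useful sanity check.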

Dr. Wayne Romanishan Jr., Psy.D.
Learn about the Reciprocity Engine™
How to use this diagnostic: Rate each statement on its 7-point slider (1 = Strongly disagree, 7 = Strongly agree). If uncertain, consult the tooltip guidance or select the midpoint (4). The scan is confidential and intended to prompt professional follow-up.
Tip: Be candid. The diagnostic is robust to uncertainty; midpoints capture a lack of evidence and become diagnostic flags.
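To illustrate how midpoint answers can become flags, a small sketch, assuming any item rated exactly at the midpoint is surfaced as a "low evidence" item for follow-up (the flagging rule is an assumption, not the published method):

```python
def low_evidence_flags(responses: list[int], midpoint: int = 4) -> list[int]:
    """Return the 1-based item numbers answered at the midpoint.

    Assumption: a midpoint answer signals missing evidence rather than
    genuine neutrality, so it is flagged for professional follow-up.
    """
    return [i for i, r in enumerate(responses, start=1) if r == midpoint]
```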
Tooltip guidance for the 28 statements, one note per item:
1. Evidence: measurable KPIs linked to daily tasks.
2. Consider whether roles and escalations exist and are enforced.
3. Look for documented role descriptions and examples of consistent messaging.
4. Evidence: impact assessments, stakeholder reviews, pilot tests.
5. Consider observable workarounds and error-prone steps.
6. Are people able to focus? How fragmented is attention?
7. Measure time-to-competence and the frequency of support calls.
8. Does leadership ask "why did the system allow this?"
9. Look at hiring metrics and early attrition.
10. Consider mentoring, succession planning, and visibility of paths.
11. Are incentives tailored across cohorts?
12. Not just pay: acknowledgment, visibility, autonomy.
13. Consider refactor rates, rework hours, and handoff clarity.
14. Are escalation rules and prioritization rituals consistent?
15. Look for sprint predictability and lead-time measures.
16. Assess whether teams are cross-trained or siloed.
17. Are documents versioned and discoverable?
18. Consider audit trails and role-based controls.
19. Measure local downloads and uncontrolled copies.
20. Integration between tools and human workflows is engineered, not ad hoc.
21. Transparency and empathy in communication are key signals.
22. Are success metrics defined and reviewed?
23. Consider coaching, Q&A sessions, and "why" artifacts.
24. Are retro practices and lessons learned institutionalized?
25. Are analytics contextualized and ethical?
26. Look for anonymized aggregate use and human-centered interventions.
27. Are models validated to avoid misattributing causes?
28. Is the org proactive or reactive?