+++
title = 'Evaluation of Websites'
+++
# Evaluation of Websites
## Why evaluate?

- attract more visitors
- sell more products
- decide which web app to use
- improve visitor ratings
- etc.

## Evaluation studies

- always start with a clear research question
    - aka "problem statement"
    - practical/theoretical relevance, feasible
    - types of studies
        - explorative — what is related?
            - e.g. why do people visit the site again?
        - descriptive — what happens?
            - e.g. how many people find the site through a search engine?
        - explanatory — why does it happen?
            - if you add login, do people visit the site again?
            - does a change in structure make it easier for visitors to find what they're looking for?
- make a hypothesis
    - a prediction of the outcome of the test
    - deduced from theory or observations
- collect data
    - e.g. in lab experiments, survey, interview…
    - qualitative (non-numerical) and quantitative (numerical)
    - test how the dependent variable responds to changes in the independent variable
    - specific population (customers/all web users/registered users/whatever)
    - specific sample (random/convenience/volunteers)
- evaluation methods
    - common
        - mockups
            - low fidelity — early in the design phase, only basic functionality, static, cheap. focus on concepts.
            - high fidelity — later in the design phase, refined details, expensive.
        - prototypes — working example of the website
        - focus groups — moderated group discussion, early in the design stage.
        - card sorting — a group of people sorts items into clusters to find an intuitive structure for the website
        - usability inspection — go systematically through the website, checking it against the Ten Web Guidelines. performed by the dev team.
        - group walkthrough — a group of people walks through the website as if performing its primary tasks
        - user testing — remote; observe the user while they perform primary tasks. log actions, eye tracking, record video/audio.
        - survey

    - specific to web evaluation
        - web analytics
            - analyse logfiles (see the log-parsing sketch after this list)
            - JavaScript page tagging to capture visitor data
            - very good objective data, but privacy concerns and no insight into motivation or unvisited pages.
        - online experiments
            - distribute visitors over the versions and see which performs better
            - done after release
            - example: A/B testing (see the A/B sketch after this list)
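
To make the web-analytics point concrete, here is a minimal log-parsing sketch. It assumes an access log in the Combined Log Format under the hypothetical name `access.log` and simply counts page views per path and unique visitor IPs; it illustrates the idea, not any specific tooling from the course.

```python
# Minimal web-analytics sketch: count page views and unique visitor IPs.
# Assumes Combined Log Format and a hypothetical file called access.log.
import re
from collections import Counter

LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]*"')

def analyse(logfile: str = "access.log"):
    views = Counter()     # page views per path
    visitors = set()      # distinct client IPs
    with open(logfile) as f:
        for line in f:
            m = LINE.match(line)
            if not m or m.group("method") != "GET":
                continue
            views[m.group("path")] += 1
            visitors.add(m.group("ip"))
    return views, visitors

if __name__ == "__main__":
    views, visitors = analyse()
    print(f"{len(visitors)} unique visitor IPs")
    for path, count in views.most_common(10):
        print(f"{count:6d}  {path}")
```

As the notes say, this kind of data is objective but limited: it only records requests that reached the server, so it reveals nothing about visitor motivation or about pages that were never visited.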
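For the online-experiments bullet, here is a hedged A/B sketch: visitors are deterministically split over two versions, the version shown is the independent variable, conversion is the dependent variable, and a two-proportion z-test (one common choice, not necessarily the one used in the lecture) decides whether the observed difference is significant. All visitor and conversion counts below are invented placeholders.

```python
# A/B test sketch: assign visitors to versions, then compare conversion rates
# with a two-proportion z-test. All counts are made-up placeholders.
from hashlib import sha256
from math import sqrt, erfc

def assign_version(visitor_id: str) -> str:
    """Deterministically distribute visitors over the two versions."""
    return "A" if int(sha256(visitor_id.encode()).hexdigest(), 16) % 2 == 0 else "B"

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for conversion rate of B vs A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                      # two-sided normal tail probability
    return z, p_value

if __name__ == "__main__":
    print("visitor 42 sees version", assign_version("visitor-42"))
    z, p = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2380)
    print(f"z = {z:.2f}, two-sided p = {p:.4f}")
    print("difference significant at 5%?", p < 0.05)
```

This mirrors the explanatory study type above: change one thing (the version shown), keep everything else fixed, and test whether the dependent variable moves.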