Paul Harrison's blog




~ optimize for diversity ~

20 November 2021, 1:22 UTC -- Delayed webcam mirror

[permalink]


9 November 2021, 6:23 UTC -- Ghostsurn

I have been working on a successor to my tile layout app Ghost Diagrams, which I am calling Ghostsurn. The novel feature of Ghostsurn is finding not just a valid example of a tile layout, but a properly random sample from the set of all possible layouts.

[permalink]


10 April 2021, 5:53 UTC -- We've been doing k-means wrong for more than half a century

(previously)

Updated 2021-06-04: The k-means++ implementation I was using previously appears to have been flawed. I've updated results using a better implementation.

The above report focusses on R. @ctwardy has replicated the basic result here and done some further exploration in Python.

Updated 2021-06-19: Added Appendix 2, sketching an argument that the asymptotic density of k-means++ is optimal.

[permalink]


27 September 2020, 0:23 UTC -- k-means the diversifier, the deviralizer

For a collection of points, the k-means algorithm seeks a set of k "mean" points minimizing the sum of squared distances from each point to its nearest mean. k-means is a simple way of clustering data. There is a fast approximate algorithm for finding a local optimum, but this might not be sufficient for the application I am talking about here, which needs something like a truly global optimum. k-means can also be viewed as a way of approximating a dataset using a smaller number of points, even if the data does not consist of distinct "clusters".
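
As a minimal illustration (my example, using base R's stats::kmeans(), which runs the usual fast approximate algorithm):

    # Approximate 1000 points by 5 "mean" points.
    set.seed(1)
    X <- matrix(rnorm(1000 * 2), ncol = 2)

    fit <- kmeans(X, centers = 5, nstart = 20)

    fit$centers        # the 5 mean points
    fit$tot.withinss   # the sum of squared distances being minimized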

I've recently become interested in the behaviour of k-means for large k. What is the distribution of the means compared to the original distribution of vectors, as k becomes large but with the number of points n always much larger still?

In one dimension Wong (1982) has shown that the density distribution of the means is proportional to the cube root of the original density distribution. Raising a density to a fractional power such as 1/3 has a flattening, widening effect: peaks are lowered and tails are fattened. After some rough calculation (see the appendix at the end), in d dimensions I believe the distribution of the means will be proportional to the original distribution raised to the power d/(2+d). Altering the distance metric (k-medians, etc.) will, I think, result in a different power.
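
A rough numeric check of the one-dimensional claim, as a sketch; it leans on the Ckmeans.1d.dp package mentioned in a later note to get close to the global optimum:

    # Sketch: check that the density of the means goes like the cube root of
    # the original density, for a standard normal sample in one dimension.
    library(Ckmeans.1d.dp)

    set.seed(1)
    x <- rnorm(100000)
    k <- 200
    fit <- Ckmeans.1d.dp(x, k)    # exact 1D k-means
    m <- sort(fit$centers)

    # Local density of means is roughly 1 / spacing between adjacent means.
    mid     <- (m[-1] + m[-k]) / 2
    spacing <- diff(m)

    # If density of means is proportional to density^(1/3), a log-log fit
    # should have slope close to 1/3.
    coef(lm(log(1 / spacing) ~ log(dnorm(mid))))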

So, for any collection of things where we have a notion of distance, k-means provides a way to flatten and summarize the distribution. (If we don't want to interpolate, we could limit means to members of the original collection.)
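
One standard way to do that restriction is k-medoids. A minimal sketch using cluster::pam(), which is my suggestion rather than anything above, and which minimizes a sum of distances rather than squared distances:

    # Sketch: restrict the summary points to actual members of the collection.
    library(cluster)

    set.seed(1)
    X <- matrix(rnorm(2000), ncol = 2)
    fit <- pam(X, k = 20)

    fit$medoids   # 20 actual rows of X, chosen to summarize the collection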

In a collection in which most of the variation is only in a subset of dimensions I think the effective d will depend on k. For example, for moderate k the effective dimension might be 1 or 2, but for large k one gets into the fine structure, the effective d rises, and the flattening effect is reduced.

This idea of flattening a distribution seems useful, so an algorithm that does it is exciting.

It also seems reminiscent of Wikipedia pages, which tend to cover all the major opinions, not including wild theories but also not entirely focussing on the dominant theory.


Update 2020-10-24: I've applied this idea by clustering 2020 bioRxiv abstracts. This uses a "greedy" variant of k-means where means are optimized one after the other. In other words, it is an ordered list in which topics become progressively more specialized. The algorithm I used also has fairly good ability to escape local optima.
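
As a sketch of the general idea only (my reconstruction, not necessarily the algorithm used for the bioRxiv clustering), one greedy scheme picks each new mean to minimize the total sum of squared distances, given the means already chosen:

    # Sketch: greedy k-means, choosing means one at a time. Slow and purely
    # illustrative; candidate means are drawn from the data points themselves.
    greedy_kmeans <- function(X, k, n_candidates = 100) {
        means <- NULL
        for (i in seq_len(k)) {
            candidates <- X[sample(nrow(X), n_candidates), , drop = FALSE]
            best <- NULL
            best_ss <- Inf
            for (j in seq_len(nrow(candidates))) {
                trial <- rbind(means, candidates[j, ])
                # Squared distance from each point to its nearest mean so far.
                d2 <- apply(X, 1, function(p) min(colSums((t(trial) - p)^2)))
                ss <- sum(d2)
                if (ss < best_ss) { best_ss <- ss; best <- trial }
            }
            means <- best
        }
        # An ordered list of means: earlier means summarize broad structure,
        # later means cover progressively more specialized regions.
        means
    }

    X <- matrix(rnorm(1000 * 2), ncol = 2)
    means <- greedy_kmeans(X, k = 10)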


Further note 2021-03-23: The usual k-means algorithm performs very poorly at finding the global optimum, or even at producing clusterings with the properties I have described the global optimum as having. Some improvement can be obtained by initializing cluster membership using Ward agglomerative clustering. In R, use fastcluster::hclust.vector() followed by cutree().
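
A sketch of this initialization in R, assuming the fastcluster package is installed, followed by the usual k-means refinement:

    # Sketch: initialize k-means from Ward agglomerative clustering.
    library(fastcluster)

    set.seed(1)
    X <- matrix(rnorm(10000 * 2), ncol = 2)
    k <- 50

    hc <- hclust.vector(X, method = "ward")   # memory-efficient Ward clustering
    membership <- cutree(hc, k = k)           # cut the tree into k clusters

    # Cluster means from the Ward clustering become the starting centers.
    centers <- apply(X, 2, function(col) tapply(col, membership, mean))

    fit <- kmeans(X, centers = centers, iter.max = 100)
    fit$tot.withinss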

In one dimension, obtain the exact global optimum with Ckmeans.1d.dp::Ckmeans.1d.dp().
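
A minimal example of the exact one-dimensional solution:

    # Sketch: exact 1D global optimum via dynamic programming.
    library(Ckmeans.1d.dp)

    x <- c(rnorm(500, mean = 0), rnorm(500, mean = 5))
    fit <- Ckmeans.1d.dp(x, k = 10)

    fit$centers        # the 10 means
    fit$tot.withinss   # total within-cluster sum of squares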


Appendix: A very rough examination of what happens in d dimensions.

Consider two d-dimensional unit hypercubes, containing n1 and n2 points respectively.

How shall we allocate the k means between the two hypercubes, with k1 in the first and k2 = k-k1 in the second?

Within a hypercube, the distance from a point to its nearest mean will typically be proportional to k^(-1/d), where k is the number of means allocated to that hypercube.

So within a hypercube containing n points, the sum of squared distances will be approximately

SS = c (k^(-1/d))^2 n
   = c k^(-2/d) n

where c is some constant.

Within two hypercubes we would have

SS = c k1^(-2/d) n1 + c (k-k1)^(-2/d) n2

Assuming k is large enough that we can treat it as effectively continuous, find the minimum by differentiation:

dSS/dk1 = c n1 (-2/d) k1^(-2/d-1) - c n2 (-2/d) (k-k1)^(-2/d-1)

Set dSS/dk1 = 0
=> n1 k1^(-2/d-1) = n2 (k-k1)^(-2/d-1)
=> (k1/(k-k1))^(-2/d-1) = n2/n1
=> ((k-k1)/k1)^(2/d+1) = n2/n1
=> (k2/k1)^(2/d+1) = n2/n1
=> k2/k1 = (n2/n1)^(1/(2/d+1))
=> k2/k1 = (n2/n1)^(d/(2+d))

This shows how the k means will be allocated between two regions of differing density. With more regions, each pair of regions will be balanced in this way.
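
A rough numeric check of this allocation rule, as a sketch; plain kmeans() only finds a local optimum, so expect only approximate agreement:

    # Sketch: two unit squares (d = 2), the second 8 times as dense, placed so
    # they don't touch. The argument above predicts k2/k1 close to
    # (n2/n1)^(d/(2+d)) = 8^(1/2), about 2.8.
    set.seed(1)
    n1 <- 5000
    n2 <- 40000
    X <- rbind(cbind(runif(n1),     runif(n1)),
               cbind(runif(n2) + 2, runif(n2)))

    k <- 100
    fit <- kmeans(X, centers = k, nstart = 10, iter.max = 100)

    in_second <- fit$centers[, 1] > 1.5
    sum(in_second) / sum(!in_second)   # compare with 8^0.5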

[permalink]


10 March 2020, 19:22 UTC -- Slides from a topconfects talk at WEHI

This is a somewhat extended talk I gave at the Walter and Eliza Hall Institute. It goes into more detail about how confect values behave. The gene-set enrichment section also describes an improved method, which uses an effect size that is a linear function and no longer needs bootstrapping.

[permalink]


30 November 2019, 23:14 UTC -- Slides for topconfects talk at BiocAsia 2019

These are slides for a 15 minute presentation on my topconfects Bioconductor package, for BiocAsia 2019.

New material for this presentation is the application to gene set enrichment measurement.

[permalink]


21 August 2019, 23:43 UTC -- Slides from a talk putting Topconfects in context

This slideshow places my Topconfects method in the wider context of the current debate over the use of p-values.

[permalink]


7 August 2019, 22:25 UTC -- E(f(rarefied count)) for consistently biassed transformation

This is a small improvement on the log transformation we use in RNA-Seq and scRNA-Seq.

[permalink]


16 February 2019, 21:03 UTC -- Lorne Genome 2019 poster - weighted principal components and canonical correlation with single cell data

Poster for the Lorne Genome conference 2019, looking at single cell data where each gene can produce two measurements: RNA expression level and choice of polyadenylation site. We're not exactly sure what the correct tools for analysing this data are yet; this poster plays with weighted principal components and canonical correlation. I'm interested in expanding my use of multivariate techniques; there are whole histories of unfamiliar techniques, such as those from ecology and the Exploratory Factor Analysis methods used in psychology and marketing. Apparently multivariate techniques are particularly popular in France.

[permalink]


27 December 2018, 6:32 UTC -- Recommender systems and the viral bubble

People worry about being trapped in a filter bubble, but I have a different concern. Amongst content with a viral coefficient close to one, the amount of attention equivalent content receives is highly variable. That is, we are all sucked into the same viral bubble, collectively seeing some things and missing others of equal merit. Furthermore we tend to see viral content over content of more specific interest to us.

Recommender systems -- now commonly called "algorithms" -- have the potential to enhance or reduce this effect. Recommender systems as applied to social network content are a little creepy, but also necessary, as people build up large numbers of people to follow over time. It is important to see the announcement that a distant acquaintance has cancer, but not the latest cat picture they found funny. Given this necessity, perhaps the best we can aim for is for people to have control over their algorithm, rather than being forced to take what Facebook or Twitter (etc.) provide.

Recommender systems come in two forms:

-- Implicit systems, which infer preferences from what a user views, clicks, or consumes.

-- Explicit systems, which predict ratings (both positive and negative) that users explicitly give to items they have been shown.

I include systems which only have a "like" but no "dislike" rating, such as Facebook, among implicit systems, even though they take direct user input. However it might be that Facebook tracks exactly what it has shown a user, which would bring it closer to an explicit recommender system.

The problem with implicit recommender systems is that they are necessarily biassed by exposure: you can only like or consume something you see. Explicit recommender systems do not necessarily have this problem.

Some regularization is probably needed in a practical explicit recommender system to avoid being swamped by new content with few ratings. Compare "hot" and "new" on Reddit. Without regularization, a newish post on Reddit with a single vote (other than by the author) will have an unbiassed estimate of the upvote proportion that is either 0% or 100%. Regularization introduces bias, but this can at least be dialled up or down.
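
For example, one simple form of regularization (a sketch, not necessarily what Reddit actually does) adds prior pseudo-votes before estimating the upvote proportion; the size of the prior is the bias dial:

    # Sketch: regularize the upvote proportion by adding prior pseudo-votes.
    # Larger priors mean more bias toward 50% but less noise for new posts.
    score <- function(up, down, prior_up = 1, prior_down = 1) {
        (up + prior_up) / (up + down + prior_up + prior_down)
    }

    score(1, 0)                                # raw estimate 100%, regularized to 67%
    score(1, 0, prior_up = 5, prior_down = 5)  # stronger prior: pulled toward 50%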

One useful observation is that an explicit recommender system can use implicit data from other people and still have low bias. The dependent variable is a specific user's ratings. We need "Missing At Random" (MAR) data for this, which means data in which the missingness does not depend on the rating, given the independent variables. Any information that helps predict missingness can be used as an independent variable to reduce bias and increase accuracy.

Having the choice on social networks to use an explicit recommender system algorithm with a bias dial is an important freedom we currently lack.


Notes

-- The terms "bias", "regularization", and "Missing At Random" here have technical meanings.

-- njh points out these systems are often thought of in terms of multi-armed bandits. A multi-armed bandit has a record of exactly what it has shown the user (what levers it has pulled), so it is an explicit system with the potential to manage bias. The bandit concept of exploration/exploitation trade-off may be a better way of thinking about what I've called regularization.


[permalink]



13 October 2018, 0:27 UTC -- Ball hypothesis tests
5 October 2018, 11:17 UTC -- Weighted least squares
9 March 2018, 10:59 UTC -- Determining the sign of an effect size is quite similar from Frequentist and Bayesian perspectives
8 November 2017, 5:34 UTC -- Topconfects talk
28 October 2017, 3:55 UTC -- Scatter plots with density quartiles
29 May 2017, 23:28 UTC -- Melbourne Datathon 2017 - my Kaggle entry
1 April 2017, 22:58 UTC -- Diagrams of classical statistical procedures
16 March 2017, 23:29 UTC -- Finding your ut-re-mi-fa-sol-la on a monochord, and making simple drinking straw reedpipes
18 October 2016, 5:53 UTC -- Shiny interactivity with grid graphics
7 August 2016, 2:17 UTC -- Crash course in R a la 2016 with a biological flavour
17 July 2016, 1:45 UTC -- Vectors are enough
28 May 2016, 22:04 UTC -- Sci-Hub over JSTOR
4 November 2015, 23:41 UTC -- Composable Shiny apps
14 September 2015, 4:39 UTC -- Recorder technique and divisions in the 16th century
30 July 2015, 0:04 UTC -- Linear models, a practical introduction in R
7 May 2015, 5:47 UTC -- When I was a young lad in the '90s
21 August 2014, 6:03 UTC -- First-past-the-post voting outcomes tend to surprise the candidates
21 August 2014, 2:59 UTC -- Dates in Google Search aren't trustworthy
27 June 2014, 2:22 UTC -- Reading "Practical Foundations of Mathematics"
18 May 2014, 10:34 UTC -- Cellular automaton tiles revisited

All older entries




[atom feed]  
[æ]