
Possible error with transfer entropy calculation in Python


I want to calculate the transfer entropy between two neurons. I have two time-series vectors representing the activity of the two neurons over time. I am using transfer entropy to answer the question "how much does knowledge of neuron 1's activity reduce uncertainty about what neuron 2's activity is going to be at a later time point, compared to just knowing the activity of neuron 2?"
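
As far as I understand it, the per-shift quantity I'm estimating is the following (writing X for the source neuron and Y for the target, and conditioning on a single lagged sample at each shift i rather than on the full past, which is the simplification my code makes):

    TE_{X \to Y}(i) = H(Y_t \mid Y_{t-i}) - H(Y_t \mid Y_{t-i}, X_{t-i})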

I've written a function in Python that seems to do the job; however, I am getting results which suggest there may be some mistake, because it looks as if neuron 1 is predicting neuron 2, when it should be the other way around.

I'd be super happy if someone could look at my function and see if they spot any errors in my calculation.

Here's a GitHub link if you prefer reading it there:

import numpy as np

def transferEntropy_average2(iiv1, iiv2, L=100):
    """The interpretation is: how much does iiv2 predict iiv1?"""
    # iiv1 and iiv2 are time vectors which denote neuronal activity as time series
    # The idea is to discretize them, and use that to calculate probabilities
    # of different activity levels
    
    # I call neuron1 iiv1, and neuron2 iiv2. We're looking for how much neuron2 predicts neuron1
    
    # Set some parameters
    N_quantiles = 100
    PY = np.zeros(N_quantiles)  # Initialize PY (only kept for debugging, not used in the result)
    quantiles = np.linspace(0, 100, N_quantiles + 1)[:-1]  # Percentile points 0, 1, ..., 99
    q1 = np.percentile(iiv1, quantiles)  # Percentile edges used to discretize iiv1
    q2 = np.percentile(iiv2, quantiles)  # And the same for the other neuron

    # Allocate memory
    # PY_joint[a, b, j]: joint counts of neuron1 being in bin a at time t while it was
    # in bin b at time t-(j+1), for each of the L shifts
    PY_joint = np.zeros((N_quantiles, N_quantiles, L))
    # PXY is the same joint count matrix as above, but with the past activity of neuron2
    # added as well, so basically p(Y_t, Y_{t-i}, X_{t-i}), done for L values of i
    PXY = np.zeros((N_quantiles, N_quantiles * N_quantiles, L))
    
    for i in range(L + 1, len(iiv1)):  # Loop over time, leaving room for L shifts into the past
        quantile1 = np.searchsorted(q1, iiv1[i], side='right') - 1  # How active was neuron1 at time i (quantized)
        PY[quantile1] += 1
        for j in range(L):
            quantile2 = np.searchsorted(q1, iiv1[i - (j + 1)], side='right') - 1  # How active was neuron1 at t-(j+1)
            quantile3 = np.searchsorted(q2, iiv2[i - (j + 1)], side='right') - 1  # How active was neuron2 at t-(j+1)
            PY_joint[quantile1, quantile2, j] += 1
            # The striped indexing below is just to avoid a 4D array: every column (axis=1) of PXY
            # means "at t-(j+1), neuron1 was in bin quantile2 while neuron2 was in bin quantile3"
            PXY[quantile1, quantile2 + (quantile3 * N_quantiles), j] += 1
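            # (For illustration: this flat index should be the same as
            # np.ravel_multi_index((quantile3, quantile2), (N_quantiles, N_quantiles)),
            # i.e. a (neuron2-bin, neuron1-bin) pair flattened into one column index.)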
    
    
    # Then normalize
    PY = PY / np.sum(PY)  # Again, only kept for debugging purposes
    
    transferSum = 0  # Initialize the running sum of transfer entropy across shifts
    # I summarize the transfer entropy over multiple time shifts as the mean of the per-shift values
    
    transferTrace = np.zeros(L)  # Used to check at which shift the transfer entropy is highest
    
    for i in range(L):  # For every shift
        PY_joint[:,:,i] = PY_joint[:,:,i] / np.sum(PY_joint[:,:,i])  # Normalize: the joint p(Y_t, Y_{t-i}) should sum to one
        PXY[:,:,i] = PXY[:,:,i] / np.sum(PXY[:,:,i])  # Same here: normalize the joint probability matrix
        
        # This is the "marginal X", which is actually p(Y_{t-i}, X_{t-i}): the probability of the
        # past activity pair at this shift, irrespective of what neuron1 shows at time t
        pX = np.sum(PXY[:,:,i], axis=0)
        
        # Calculate transfer entropy
        HYIYt = 0  # Conditional entropy H(Y_t | Y_{t-i}): neuron1's activity given its own past
        HYIYt_Xt = 0  # Conditional entropy H(Y_t | Y_{t-i}, X_{t-i}): conditioned on both pasts
        # Marginal probability of the shifted Y
        pYt = np.sum(PY_joint[:,:,i], axis=0)
        
        # Now we're all set to calculate the conditional entropies.
        # Here I calculate the conditional entropy of neuron1 conditioned on its own activity at each earlier time,
        # and the conditional entropy of neuron1 conditioned on the activity of both neurons at each earlier time
        for n1 in range(N_quantiles):
            for n2 in range(N_quantiles):
                
                if pYt[n2] == 0 or PY_joint[n1,n2,i] == 0: # Treat 0*log(0) terms as 0
                    result = 0
                else:
                    result = PY_joint[n1,n2,i] * np.log2(PY_joint[n1,n2,i] / pYt[n2])
                HYIYt = HYIYt - result
                for n3 in range(N_quantiles):
                    
                    if pX[n2+(n3*N_quantiles)] == 0 or PXY[n1,n2+(n3*N_quantiles),i] == 0:
                        result = 0
                    else:
                        result = (PXY[n1,n2+(n3*N_quantiles),i] * np.log2(PXY[n1,n2+(n3*N_quantiles),i] / pX[n2+(n3*N_quantiles)]))
                    HYIYt_Xt = HYIYt_Xt - result
        
        # Now we have calculated both conditional entropies, and the transfer entropy for this
        # shift is their difference. I add these up in transferSum so the mean can be computed later
        transferSum = transferSum + (HYIYt - HYIYt_Xt)
        transferTrace[i] = (HYIYt - HYIYt_Xt)  # Save the transfer entropy at this time shift
    
    transfer_entropy = transferSum / L
    return transfer_entropy
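
To debug the direction, a minimal synthetic sanity check along these lines is what I have in mind (the coupling setup and names here are just made up for illustration): iiv2 drives iiv1 with a known lag, so if the docstring's interpretation is right, the first call below should come out clearly larger than the reversed one.

# Hypothetical sanity check (illustration only)
rng = np.random.default_rng(0)
n, lag = 5000, 3
iiv2 = rng.normal(size=n)                              # the driver ("neuron 2")
iiv1 = np.roll(iiv2, lag) + 0.1 * rng.normal(size=n)   # a lagged copy plus noise ("neuron 1")
iiv1[:lag] = rng.normal(size=lag)                      # overwrite the wrapped-around samples

te_forward = transferEntropy_average2(iiv1, iiv2, L=10)  # neuron2 -> neuron1 (the true coupling)
te_reverse = transferEntropy_average2(iiv2, iiv1, L=10)  # neuron1 -> neuron2
print(te_forward, te_reverse)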