a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,464,691 | <p>If your algorithm is fundamentally sequential, you can't make it fundamentally not that.</p>
<p>What is the algorithm you are using?</p>
<p>EDIT: Googling "diffusion limited aggregation algorithm parallel" led me <a href="http://arxiv.org/abs/comp-gas/9609001" rel="nofollow noreferrer">here</a>, with the following quote:</p>
<blockquote>
<p>DLA, on the other hand, has been shown
[9,10] to belong to the class of
inherently sequential or, more
formally, P-complete problems.
Therefore, it is unlikely that DLA
clusters can be sampled in parallel in
polylog time when restricted to a
number of processors polynomial in the
system size.</p>
</blockquote>
<p>So the answer to your question is "all signs point to no".</p> | 2010-03-17 18:06:42.500000+00:00 | 2010-03-17 20:33:46.743000+00:00 | 2010-03-17 20:33:46.743000+00:00 | null | 2,464,629 | <p>In C++, I've written a mathematical program (for diffusion limited aggregation) where each new point calculated is dependent on all of the preceding points.
Is it possible to have such a program work in a parallel or distributed manner to increase computing speed?
If so, what type of modifications to the code would I need to look into?</p>
<p>EDIT: My source code is available at...
<a href="http://www.bitbucket.org/damigu78/brownian-motion/downloads/" rel="nofollow noreferrer">http://www.bitbucket.org/damigu78/brownian-motion/downloads/</a>
filename is DLA_full3D.cpp
I don't mind significant re-writes if that's what it would take. After all, I want to learn how to do it.</p> | 2010-03-17 17:59:25.393000+00:00 | 2010-03-17 20:33:46.743000+00:00 | 2010-03-17 20:29:19.257000+00:00 | c++|distributed|parallel-processing | ['http://arxiv.org/abs/comp-gas/9609001'] | 1 |
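<p>(To make the sequential dependency concrete, here is an illustrative Python sketch of on-lattice DLA; it is not the OP's DLA_full3D.cpp. Each walker only sticks against the cluster formed by <em>all</em> previously placed particles, so particle n cannot be finalised before particles 1..n-1.)</p>
<pre><code>import random

def dla(n_particles, size=201):
    # occupied lattice sites; start from a single seed in the centre
    cluster = {(size // 2, size // 2)}
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        # each walker is launched only after the previous one has stuck:
        # its stopping condition depends on every particle placed so far
        # (sketch: ignores edge cases such as spawning on an occupied site)
        x, y = random.randrange(size), random.randrange(size)
        while True:
            dx, dy = random.choice(steps)
            x, y = (x + dx) % size, (y + dy) % size
            if any((x + sx, y + sy) in cluster for sx, sy in steps):
                cluster.add((x, y))
                break
    return cluster
</code></pre>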
30,305,036 | <p>Besides the good reference provided above, there is another paper by Yann LeCun's group that does text classification just by encoding characters, without using any external feature extraction library. It works simply by encoding at the character level. They claim 98% accuracy.</p>
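<p>(A minimal sketch of what "encoding at the character level" means, as my own illustration rather than code from that paper: each character becomes a one-hot vector over a fixed alphabet, so a document turns into a length-by-alphabet matrix that a ConvNet can consume directly.)</p>
<pre><code>import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'\""
CHAR_TO_IDX = {c: i for i, c in enumerate(ALPHABET)}

def encode(text, max_len=1014):
    """One-hot encode a string character by character; unknown characters stay all-zero."""
    out = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for pos, ch in enumerate(text.lower()[:max_len]):
        idx = CHAR_TO_IDX.get(ch)
        if idx is not None:
            out[pos, idx] = 1.0
    return out

x = encode("Deep learning for document classification")
print(x.shape)  # (1014, 45)
</code></pre>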
<p><a href="http://arxiv.org/pdf/1502.01710v2.pdf" rel="nofollow">http://arxiv.org/pdf/1502.01710v2.pdf</a></p> | 2015-05-18 13:51:18.020000+00:00 | 2015-05-18 13:51:18.020000+00:00 | null | null | 19,646,123 | <p>I was reading the papers on deep learning. Most of them refer to unsupervised learning.</p>
<p>They also say the neurons are pre-trained using an unsupervised RBM network. Later they are fine-tuned using the backpropagation algorithm (supervised). </p>
<p>So can we solve supervised learning problems using deep learning?</p>
<p>I am trying to find out if deep learning can be applied to the document classification problem.
I know there are pretty good classifiers available. But my goal is to find out whether we can use deep learning for this purpose or not.</p> | 2013-10-28 22:19:09.377000+00:00 | 2021-02-26 06:25:34.637000+00:00 | 2016-12-29 21:54:27.530000+00:00 | machine-learning|neural-network|deep-learning|supervised-learning | ['http://arxiv.org/pdf/1502.01710v2.pdf'] | 1 |
55,908,555 | <p>PPO has a built-in mechanism (the clipped surrogate objective) to prevent excessively large gradient updates, and it generally outperforms A3C on most continuous control environments. </p>
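<p>For reference, a minimal NumPy sketch of that clipping mechanism (the clipped surrogate objective); this is illustrative only, not taken from any particular library:</p>
<pre><code>import numpy as np

def clipped_surrogate(new_logp, old_logp, advantages, clip_eps=0.2):
    """PPO objective: clip the probability ratio so one update cannot move the
    policy too far from the policy that collected the data."""
    ratio = np.exp(new_logp - old_logp)              # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))   # maximise this (minimise its negative)
</code></pre>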
<p>In order for PPO to enjoy the benefits of parallel computing like A3C, Distributed PPO(DPPO) is the way to go.</p>
<p>Check out the links below to find out more information about DPPO.</p>
<p><a href="https://i.stack.imgur.com/AYjQN.png" rel="noreferrer">Pseudo code from the original DeepMind paper</a></p>
<p><a href="https://arxiv.org/pdf/1707.02286.pdf" rel="noreferrer">Original DeepMind paper: Emergence of Locomotion Behaviours in Rich Environments</a></p>
<p>If you plan to implement your DPPO code in Python with Tensorflow, I will suggest you to try <a href="https://ray.readthedocs.io/en/latest/index.html#" rel="noreferrer">Ray</a> for the part on distributed execution.</p> | 2019-04-29 17:57:52.833000+00:00 | 2019-04-29 18:04:17.770000+00:00 | 2019-04-29 18:04:17.770000+00:00 | null | 51,510,460 | <p>Is there any easy way to merge properties of PPO with an A3C method? A3C methods run a number of parrel actors and optimize the parameters. I am trying to merge PPO with A3C.</p> | 2018-07-25 03:40:24.133000+00:00 | 2019-04-29 18:04:17.770000+00:00 | null | reinforcement-learning | ['https://i.stack.imgur.com/AYjQN.png', 'https://arxiv.org/pdf/1707.02286.pdf', 'https://ray.readthedocs.io/en/latest/index.html#'] | 3 |
62,147,916 | <p>You can try one or two things to stabilize the training:</p>
<ol>
<li><p>You can try different batch sizes of 4, 8, 16, 32, 64. You can generate different plots. Have a look at this <a href="https://machinelearningmastery.com/how-to-control-the-speed-and-stability-of-training-neural-networks-with-gradient-descent-batch-size/" rel="nofollow noreferrer">link</a>. It'll generate mini plots for each batch size.</p></li>
<li><p>You can also alter the learning rate. You can apply a learning rate scheduler or ReduceLROnPlateau by directly using Keras callbacks. Alternatively, there is Cyclical LR, which tries to find an optimal learning rate: <a href="https://arxiv.org/pdf/1506.01186.pdf" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/bckenstler/CLR" rel="nofollow noreferrer">GitHub</a> (a short sketch of wiring such callbacks into <code>model.fit</code> appears right after the question below).</p></li>
</ol> | 2020-06-02 08:31:15.037000+00:00 | 2020-06-02 08:31:15.037000+00:00 | null | null | 62,146,622 | <p>I'm working with a video classification of 5 classes and using TimeDistributed CNN model in Google Colab platform. The training dataset contains 80 videos containing 5 frames each. The validation dataset contains 20 videos containing 5 frames each. The batch size I used is 64. So, in total, I'm working with 100 videos. I compiled the model using Adam optimizer and categorical cross_entropy loss.</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                                     BatchNormalization, Flatten, GRU, Dense)
model = Sequential()
input_shape=(5, 128, 128, 3)
model.add(TimeDistributed(Conv2D(32, (3, 3), strides=(1, 1),
activation='relu', padding='same'), input_shape=input_shape))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Conv2D(64, (3, 3), strides=(1, 1),
activation='relu', padding='same')))
model.add(TimeDistributed(Conv2D(128, (3, 3), strides=(1, 1),
activation='relu', padding='same')))
model.add(TimeDistributed(BatchNormalization()))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Flatten()))
model.add(GRU(64, return_sequences=False))
model.add(BatchNormalization())
model.add((Dense(128, activation='relu')))
model.add(Dense(5, activation='softmax'))
from tensorflow.keras.optimizers import Adam
model.compile(loss='categorical_crossentropy',
optimizer=Adam(lr=0.0001),
metrics=['accuracy'])
</code></pre>
<p>But, after fitting this model with the dataset, the training accuracy curve is fluctuating like this:</p>
<p><a href="https://i.stack.imgur.com/0qLYb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0qLYb.png" alt="here, 50 epoch is used"></a></p>
<p>Can anyone help me out to understand the reason behind this fluctuation?</p> | 2020-06-02 07:08:06.427000+00:00 | 2020-06-02 08:31:15.037000+00:00 | null | python|tensorflow|machine-learning|keras | ['https://machinelearningmastery.com/how-to-control-the-speed-and-stability-of-training-neural-networks-with-gradient-descent-batch-size/', 'https://arxiv.org/pdf/1506.01186.pdf', 'https://github.com/bckenstler/CLR'] | 3 |
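<p>(Illustrating the callback suggestion from the answer above: a minimal sketch of wiring ReduceLROnPlateau or a simple schedule into <code>model.fit</code>. The data variables and the factor/patience values are placeholders, not tuned recommendations.)</p>
<pre><code>from tensorflow.keras.callbacks import ReduceLROnPlateau, LearningRateScheduler

# halve the learning rate whenever validation loss stops improving
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=1e-6)

# or a fixed schedule: divide the learning rate by 10 every 20 epochs
scheduler = LearningRateScheduler(lambda epoch, lr: lr * 0.1 if epoch > 0 and epoch % 20 == 0 else lr)

model.fit(train_data, train_labels,
          validation_data=(val_data, val_labels),
          epochs=50, batch_size=16,
          callbacks=[reduce_lr])      # or callbacks=[scheduler]
</code></pre>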
53,684,274 | <p>This is well-known <a href="https://en.wikipedia.org/wiki/Change-making_problem" rel="nofollow noreferrer">Change-making problem</a>, which is (weakly) NP-hard problem.</p>
<p>As for all problems of such kind, there are several algorithms depending on how good solution do you want to find (usually it's <em>easy</em> to find <em>somehow good</em> solution and <em>hard</em> to find the <em>best</em> solution). Feel free to do your own research and select the one that suits your needs best.</p>
<p>As showed in a few other answers on *Overflow/Exchange network like <a href="https://cs.stackexchange.com/questions/6552/when-can-a-greedy-algorithm-solve-the-coin-change-problem">1</a>, <a href="https://stackoverflow.com/questions/6025076/how-to-tell-if-greedy-algorithm-suffices-for-finding-minimum-coin-change/6031625#6031625">2</a>, <a href="https://math.stackexchange.com/questions/25209/looking-to-understand-the-rationale-for-money-denomination">3</a> there is criteria for a given coin system [<a href="http://graal.ens-lyon.fr/~abenoit/algo09/coins2.pdf" rel="nofollow noreferrer">4</a>][<a href="https://arxiv.org/PS_cache/arxiv/pdf/0809/0809.0400v1.pdf" rel="nofollow noreferrer">5</a>] when greedy algorithm is optimal (such systems are called <em>canonical coin systems</em>). For your case if you have 1, 5 and 10-user possible licenses, greedy algorithm <strong>is optimal</strong>.</p>
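<p>A minimal Python sketch of that greedy approach for the license sizes in question (function and variable names are mine):</p>
<pre><code>def license_breakdown(users, pack_sizes=(20, 10, 5, 1)):
    """Greedy change-making: always use the largest license pack that still fits.
    This is optimal here because (1, 5, 10, 20) is a canonical coin system."""
    breakdown = {}
    for size in sorted(pack_sizes, reverse=True):
        count, users = divmod(users, size)
        if count:
            breakdown[size] = count
    return breakdown

print(license_breakdown(17))   # {10: 1, 5: 1, 1: 2}
</code></pre>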
<p>[<a href="http://graal.ens-lyon.fr/~abenoit/algo09/coins2.pdf" rel="nofollow noreferrer">4</a>] D. Pearson. A Polynomial-time Algorithm for the Change-Making Problem</p>
<p>[<a href="https://arxiv.org/PS_cache/arxiv/pdf/0809/0809.0400v1.pdf" rel="nofollow noreferrer">5</a>] Xuan Cai and Yiyuan Zheng. Canonical Coin Systems for Change-Making
Problems</p> | 2018-12-08 15:59:24.417000+00:00 | 2018-12-09 13:15:42.790000+00:00 | 2018-12-09 13:15:42.790000+00:00 | null | 53,684,050 | <p>I am currently learning Python and my idea would be to convert an existing Excel project into a web application. </p>
<p>As part of the configuration required in the project the user requires licenses for certain features. Example, if there are a total of 17 users that require a feature, the licenses are available for 1 user, 5 users, 10 users, 20 users.</p>
<p>So to cater for the 17 users above I would require:
2 x 1 user
1 x 5 user
1 x 10 user</p>
<p>The configuration consists of over 400 different licenses.</p>
<p>Achieving the above is possible using IF, ELIF and ELSE and returning the remainder and then looping all over again until the remainder is 0.</p>
<p>I am sure there would be a more productive way to go about the above.</p>
<p>Any advice, or suggestions on how to word this better in a search so I can do more research?</p>
<p>Your assistance is appreciated.</p> | 2018-12-08 15:35:18.727000+00:00 | 2018-12-09 13:15:42.790000+00:00 | null | python | ['https://en.wikipedia.org/wiki/Change-making_problem', 'https://cs.stackexchange.com/questions/6552/when-can-a-greedy-algorithm-solve-the-coin-change-problem', 'https://stackoverflow.com/questions/6025076/how-to-tell-if-greedy-algorithm-suffices-for-finding-minimum-coin-change/6031625#6031625', 'https://math.stackexchange.com/questions/25209/looking-to-understand-the-rationale-for-money-denomination', 'http://graal.ens-lyon.fr/~abenoit/algo09/coins2.pdf', 'https://arxiv.org/PS_cache/arxiv/pdf/0809/0809.0400v1.pdf', 'http://graal.ens-lyon.fr/~abenoit/algo09/coins2.pdf', 'https://arxiv.org/PS_cache/arxiv/pdf/0809/0809.0400v1.pdf'] | 8 |
59,815,576 | <p>There's probably a way to do this by creating a sequence of <em>texture views</em> on the input texture array and output texture array, encoding a <code>MPSImageStatisticsMeanAndVariance</code> kernel invocation for each slice.</p>
<p>But let's take a look at how to do it ourselves. There are many different possible approaches, so I chose one that was simple and used some interesting results from statistics. </p>
<p>Essentially, we'll do the following:</p>
<ol>
<li>Write a kernel that can produce a subset mean and variance for a single row of the image.</li>
<li>Write a kernel that can produce an overall mean and variance from the partial results from step 1.</li>
</ol>
<p>Here are the kernels:</p>
<pre class="lang-cpp prettyprint-override"><code>kernel void compute_row_mean_variance_array(texture2d_array<float, access::read> inTexture [[texture(0)]],
texture2d_array<float, access::write> outTexture [[texture(1)]],
uint3 tpig [[thread_position_in_grid]])
{
uint row = tpig.x;
uint slice = tpig.y;
uint width = inTexture.get_width();
if (row >= inTexture.get_height() || slice >= inTexture.get_array_size()) { return; }
float4 mean(0.0f);
float4 var(0.0f);
for (uint col = 0; col < width; ++col) {
float4 rgba = inTexture.read(ushort2(col, row), slice);
// http://datagenetics.com/blog/november22017/index.html
float weight = 1.0f / (col + 1);
float4 oldMean = mean;
mean = mean + (rgba - mean) * weight;
var = var + (rgba - oldMean) * (rgba - mean);
}
var = var / width;
outTexture.write(mean, ushort2(row, 0), slice);
outTexture.write(var, ushort2(row, 1), slice);
}
kernel void reduce_mean_variance_array(texture2d_array<float, access::read> inTexture [[texture(0)]],
texture2d_array<float, access::write> outTexture [[texture(1)]],
uint3 tpig [[thread_position_in_grid]])
{
uint width = inTexture.get_width();
uint slice = tpig.x;
// https://arxiv.org/pdf/1007.1012.pdf
float4 mean(0.0f);
float4 meanOfVar(0.0f);
float4 varOfMean(0.0f);
for (uint col = 0; col < width; ++col) {
float weight = 1.0f / (col + 1);
float4 oldMean = mean;
float4 submean = inTexture.read(ushort2(col, 0), slice);
mean = mean + (submean - mean) * weight;
float4 subvar = inTexture.read(ushort2(col, 1), slice);
meanOfVar = meanOfVar + (subvar - meanOfVar) * weight;
varOfMean = varOfMean + (submean - oldMean) * (submean - mean);
}
float4 var = meanOfVar + varOfMean / width;
outTexture.write(mean, ushort2(0, 0), slice);
outTexture.write(var, ushort2(1, 0), slice);
}
</code></pre>
<p>In summary, to achieve step 1, we use an "online" (incremental) algorithm to calculate the partial mean/variance of the row in a way that's more numerically-stable than just adding all the pixel values and dividing by the width. My reference for writing this kernel was <a href="http://datagenetics.com/blog/november22017/index.html" rel="nofollow noreferrer">this post</a>. Each thread in the grid writes its row's statistics to the appropriate column and slice of an intermediate texture array.</p>
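<p>(The same incremental update written out in scalar Python, purely for reference; it mirrors the per-row loop above but is not part of the Metal code:)</p>
<pre><code>def online_mean_var(xs):
    """Incremental (Welford-style) mean/variance over a sequence."""
    mean, m2 = 0.0, 0.0
    for n, x in enumerate(xs, 1):
        old_mean = mean
        mean += (x - mean) / n
        m2 += (x - old_mean) * (x - mean)
    return mean, m2 / len(xs)      # population variance, like var / width above
</code></pre>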
<p>To achieve step 2, we need to find a statistically-sound way of computing the overall statistics from the partial results. This is quite simple in the case of finding the mean: the mean of the population is the mean of the means of the subsets (this holds when the sample size of each subset is the same; in the general case, the overall mean is a weighted sum of the subset means). The variance is trickier, but it <a href="https://arxiv.org/pdf/1007.1012.pdf" rel="nofollow noreferrer">turns</a> <a href="http://www.burtonsys.com/climate/composite_standard_deviations.html" rel="nofollow noreferrer">out</a> that the variance of the population is the sum of the mean of the variances of the subsets and the variance of the means of the subsets (the same caveat about equally-sized subsets applies here). This is a convenient fact that we can combine with our incremental approach above to produce the final mean and variance of each slice, which is written to the corresponding slice of the output texture.</p>
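<p>(A quick NumPy check of that combination rule, independent of Metal: for equally sized subsets, the total population variance equals the mean of the subset variances plus the variance of the subset means. Illustration only:)</p>
<pre><code>import numpy as np

rows = np.random.rand(64, 128)                     # 64 equally sized "rows"

sub_means = rows.mean(axis=1)
sub_vars = rows.var(axis=1)                        # population variance per row

combined_mean = sub_means.mean()
combined_var = sub_vars.mean() + sub_means.var()   # mean of variances + variance of means

assert np.isclose(combined_mean, rows.mean())
assert np.isclose(combined_var, rows.var())
</code></pre>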
<p>For completeness, here's the Swift code I used to drive these kernels:</p>
<pre class="lang-swift prettyprint-override"><code>let library = device.makeDefaultLibrary()!
let meanVarKernelFunction = library.makeFunction(name: "compute_row_mean_variance_array")!
let meanVarComputePipelineState = try! device.makeComputePipelineState(function: meanVarKernelFunction)
let reduceKernelFunction = library.makeFunction(name: "reduce_mean_variance_array")!
let reduceComputePipelineState = try! device.makeComputePipelineState(function: reduceKernelFunction)
let width = sourceTexture.width
let height = sourceTexture.height
let arrayLength = sourceTexture.arrayLength
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba32Float, width: width, height: height, mipmapped: false)
textureDescriptor.textureType = .type2DArray
textureDescriptor.arrayLength = arrayLength
textureDescriptor.width = height
textureDescriptor.height = 2
textureDescriptor.usage = [.shaderRead, .shaderWrite]
let partialResultsTexture = device.makeTexture(descriptor: textureDescriptor)!
textureDescriptor.width = 2
textureDescriptor.height = 1
textureDescriptor.usage = .shaderWrite
let destTexture = device.makeTexture(descriptor: textureDescriptor)!
let commandBuffer = commandQueue.makeCommandBuffer()!
let computeCommandEncoder = commandBuffer.makeComputeCommandEncoder()!
computeCommandEncoder.setComputePipelineState(meanVarComputePipelineState)
computeCommandEncoder.setTexture(sourceTexture, index: 0)
computeCommandEncoder.setTexture(partialResultsTexture, index: 1)
let meanVarGridSize = MTLSize(width: sourceTexture.height, height: sourceTexture.arrayLength, depth: 1)
let meanVarThreadgroupSize = MTLSizeMake(meanVarComputePipelineState.threadExecutionWidth, 1, 1)
let meanVarThreadgroupCount = MTLSizeMake((meanVarGridSize.width + meanVarThreadgroupSize.width - 1) / meanVarThreadgroupSize.width,
(meanVarGridSize.height + meanVarThreadgroupSize.height - 1) / meanVarThreadgroupSize.height,
1)
computeCommandEncoder.dispatchThreadgroups(meanVarThreadgroupCount, threadsPerThreadgroup: meanVarThreadgroupSize)
computeCommandEncoder.setComputePipelineState(reduceComputePipelineState)
computeCommandEncoder.setTexture(partialResultsTexture, index: 0)
computeCommandEncoder.setTexture(destTexture, index: 1)
let reduceThreadgroupSize = MTLSizeMake(1, 1, 1)
let reduceThreadgroupCount = MTLSizeMake(arrayLength, 1, 1)
computeCommandEncoder.dispatchThreadgroups(reduceThreadgroupCount, threadsPerThreadgroup: reduceThreadgroupSize)
computeCommandEncoder.endEncoding()
let destTexture2DDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba32Float, width: 2, height: 1, mipmapped: false)
destTexture2DDesc.usage = .shaderWrite
let destTexture2D = device.makeTexture(descriptor: destTexture2DDesc)!
meanVarKernel.encode(commandBuffer: commandBuffer, sourceTexture: sourceTexture2D, destinationTexture: destTexture2D)
#if os(macOS)
let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
blitCommandEncoder.synchronize(resource: destTexture)
blitCommandEncoder.synchronize(resource: destTexture2D)
blitCommandEncoder.endEncoding()
#endif
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
</code></pre>
<p>In my experiments, this program produced the same results as <code>MPSImageStatisticsMeanAndVariance</code>, give or take some differences on the order of 1e-7. It was also <strong>2.5x slower</strong> than MPS on my Mac, probably due in part to failure to exploit latency hiding with granular parallelism.</p> | 2020-01-20 00:09:20.200000+00:00 | 2020-05-24 20:55:20.573000+00:00 | 2020-05-24 20:55:20.573000+00:00 | null | 59,809,914 | <p>how can I calculate mean and variance value of an image with 16 channels using Metal ?</p>
<p>I want to calculate the mean and variance values of the different channels separately!</p>
<p>ex.:</p>
<pre><code>kernel void meanandvariance(texture2d_array<float, access::read> in[[texture(0)]],
texture2d_array<float, access::write> out[[texture(1)]],
ushort3 gid[[thread_position_in_grid]],
ushort tid[[thread_index_in_threadgroup]],
ushort3 tg_size[[threads_per_threadgroup]]) {
}
</code></pre> | 2020-01-19 12:15:28.963000+00:00 | 2020-05-24 20:55:20.573000+00:00 | 2020-01-21 11:55:24.320000+00:00 | metal|metalkit|metal-performance-shaders | ['http://datagenetics.com/blog/november22017/index.html', 'https://arxiv.org/pdf/1007.1012.pdf', 'http://www.burtonsys.com/climate/composite_standard_deviations.html'] | 3 |
39,686,384 | <p>A few suggestions:</p>
<ol>
<li>Change initialization from <code>gauss</code> to <code>xavier</code>.</li>
<li>Work with <a href="http://arxiv.org/abs/1502.01852" rel="nofollow"><code>"PReLU"</code></a> activations, instead of <code>"ReLU"</code>. Once your net converges you can finetune to remove them. </li>
<li>Try reducing <code>base_lr</code> by an order of magnitude (or even two orders).</li>
</ol> | 2016-09-25 11:39:37.543000+00:00 | 2016-09-25 11:39:37.543000+00:00 | null | null | 39,663,506 | <p>I've implemented a home-brewed ZFNet (<a href="https://gist.github.com/stoneyang/d9f636ab78ee325db51b2822e71f4011" rel="nofollow">prototxt</a>) for my research. After 20k iterations with the definition, the test accuracy stays at ~0.001 (i.e., 1/1000), the test loss at ~6.9, and training loss at ~6.9, which seems that the net keeps playing guessing games among the 1k classes. I've thoroughly checked the whole definition and tried to change some of the hyper-parameters to start a new training, but of no avail, same results' shown on the screen....</p>
<p>Could anyone show me some light? Thanks in advance! </p>
<hr>
<p>The hyper-parameters in the prototxt are derived from the paper [1]. All the inputs and outputs of the layers seem correct, as Fig. 3 in the paper suggests. </p>
<p>The tweaks are: </p>
<ul>
<li><p><code>crop</code>-s of the input for both training and testing are set to <code>225</code> instead of <code>224</code> as discussed in #33;</p></li>
<li><p>one-pixel zero paddings for <code>conv3</code>, <code>conv4</code>, and <code>conv5</code> to make the sizes of the blobs consistent [1]; </p></li>
<li><p>filler types for all learnable layers changed from <code>constant</code> in [1] to <code>gaussian</code> with <code>std: 0.01</code>; </p></li>
<li><p><code>weight_decay</code>: changing from <code>0.0005</code> to <code>0.00025</code> as suggested by @sergeyk in PR #33;</p></li>
</ul>
<p>[1] Zeiler, M. and Fergus, R. Visualizing and Understanding Convolutional Networks, ECCV 2014. </p>
<p>and for the poor part..., I pasted it <a href="http://pastebin.com/hCXBAfwm" rel="nofollow">here</a></p> | 2016-09-23 14:39:27.240000+00:00 | 2016-09-25 11:41:02.100000+00:00 | 2016-09-25 11:41:02.100000+00:00 | machine-learning|computer-vision|deep-learning|caffe|imagenet | ['http://arxiv.org/abs/1502.01852'] | 1 |
49,940,252 | <p>Following are some suggestions regarding your two questions.</p>
<ol>
<li><p>I would recommend an iterative modelling strategy.</p>
<p>Start with</p>
<pre><code>RT ~ 1 + A*B*Groups*Gender + (1+A | Subject ID)
</code></pre>
<p>and see if the problem is tractable. Above model will include both additive effects as well as <em>all</em> interaction terms between <code>A</code>, <code>B</code>, <code>Groups</code> and <code>Gender</code>.</p>
<p>If the problem is not tractable, discard the interaction terms between <code>Gender</code> and the other covariates, and model</p>
<pre><code>RT ~ 1 + A*B*Groups + Gender + (1+A | Subject ID)
</code></pre>
<p>It's difficult to make a statement about potential overfitting without any details on the number of observations.</p></li>
<li><p>Concerning your second question: Generally, I would recommend a Bayesian approach; take a look at the <code>rstan</code>-based <code>brms</code> R package, which allows you to use the same <code>lme4</code>/<code>glmm</code> formula syntax, making it easy to translate models. Model comparison and predictive performance are very broad terms. There exist various ways to explore and compare the predictive performance of these type of nested/hierarchical Bayesian models. See for example the papers by <a href="https://arxiv.org/abs/1503.08650" rel="nofollow noreferrer">Piironi and Vehtari</a> and <a href="https://projecteuclid.org/euclid.ssu/1356628931" rel="nofollow noreferrer">Vehtari and Ojanen</a>.</p></li>
</ol> | 2018-04-20 11:05:40.927000+00:00 | 2018-04-20 11:05:40.927000+00:00 | null | null | 49,939,424 | <p>I am running a linear mixed-effects model in R, and I'm not sure how to include a covariate of no interest in the model, or even how to decide if I should do that.</p>
<p>I have two within-subject variables, let's call them A and B with two levels each, with lots of observations per participant. I'm interested in how their interaction changes across 4 groups. My outcome is reaction time. At the simplest level, I have this model:</p>
<pre><code>RT ~ 1 + A*B*Groups + (1+A | Subject ID)
</code></pre>
<p>I would like to add Gender as a covariate of no interest. I have no theoretical reason to assume it affects anything, but it's really imbalanced across groups, so I'd like to include it. The first part of my question is: What is the best way to do this?</p>
<p>Is it this model:</p>
<pre><code>RT ~ 1 + A*B*Groups + Gender + (1+A | Subject ID)
</code></pre>
<p>or this:</p>
<pre><code>RT ~ 1 + A*B*Groups*Gender + (1+A | Subject ID)
</code></pre>
<p>? Or some other way? My worries about this second model is that it somewhat unreasonably inflates the number of terms in the model. Plus I'm worried about overfitting. </p>
<p>The second part of my question: When selecting the best model, when should I add the covariate to see if it makes any difference at all? Let me explain what I mean.</p>
<p>Let's say I start with the simplest model I mentioned above, but without the slope for A, so this:</p>
<pre><code>RT ~ 1 + A*B*Groups + (1| Subject ID)
</code></pre>
<p>Should I add the covariate first, either as a main effect ( + Gender) or as part of the interaction (*Gender), and <em>then</em> see if adding a slope for A makes a difference (by using the anova() function), or can I go ahead with adding the slope (which is theoretically more important) first, and then see if gender matters at all?</p> | 2018-04-20 10:21:03.613000+00:00 | 2018-04-20 11:05:40.927000+00:00 | null | r|lme4|multilevel-analysis | ['https://arxiv.org/abs/1503.08650', 'https://projecteuclid.org/euclid.ssu/1356628931'] | 2 |
50,700,953 | <p>You can find out the number of (non-)trainable parameters of a model in Keras using the <code>summary</code> function:</p>
<pre class="lang-py prettyprint-override"><code>from keras import models, layers
model = models.Sequential()
model.add(layers.Conv2D(10, (4,4), input_shape=(32, 32, 3)))
model.summary()
</code></pre>
<p>Here is the output:</p>
<pre><code>Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 29, 29, 10) 490
=================================================================
Total params: 490
Trainable params: 490
Non-trainable params: 0
</code></pre>
<p>In general, for a 2D-convolution layer with <code>k</code> filters with size of <code>w*w</code> applied on an input with <code>c</code> channels the number of trainable parameters (considering one bias parameter for each filter, in the default case) is equal to <code>k*w*w*c+k</code> or <code>k*(w*w*c+1)</code>. In the example above we have: <code>k=10, w=4, c=3</code> therefore we have <code>10*(4*4*3+1) = 490</code> trainable parameters. As you can infer, for each channel there are separate weights and they are not shared. Furthermore, the number of parameters of a 2D-convolution layer does not depend on the width or height of the previous layer.</p>
<p><strong>Update:</strong></p>
<p>A convolution layer with depth-wise shared weights: I am not aware of such a layer and could not find a built-in implementation of that in Keras or Tensorflow either. But after thinking about it, you realize that it is essentially equivalent to summing all the channels together and then applying a 2D-convolution on the result. For example in case of a <code>32*32*3</code> image, first all the three channels are summed together resulting in a <code>32*32*1</code> tensor and then a 2D-convolution can be applied on that tensor. Therefore at least one way of achieving a 2D-convolution with shared weights across channels could be like this in Keras (which may or may not be efficient):</p>
<pre class="lang-py prettyprint-override"><code>from keras import models, layers
from keras import backend as K
model = models.Sequential()
model.add(layers.Lambda(lambda x: K.expand_dims(K.sum(x, axis=-1)), input_shape=(32, 32, 3)))
model.add(layers.Conv2D(10, (4,4)))
model.summary()
</code></pre>
<p>Output:</p>
<pre><code>Layer (type) Output Shape Param #
=================================================================
lambda_1 (Lambda) (None, 32, 32, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 29, 29, 10) 170
=================================================================
Total params: 170
Trainable params: 170
Non-trainable params: 0
</code></pre>
<p>One good thing about that Lambda layer is that it could be added in any place (e.g. after the convolution layer). But I think the most important question to ask here is: "Why would using a 2D-conv layer with depth-wise shared weights be beneficial?" One obvious answer is that the network size (i.e. the total number of trainable parameters) is reduced and therefore there might be a decrease in training time, which I suspect would be negligible. Further, using shared weights across channels implies that the patterns present in different channels are more or less similar. But this is not always the case, for example in RGB images, and therefore by using shared weights across channels I guess you might observe a (noticeable) decrease in network accuracy. So, at least, you should have in mind this trade-off and experiment with it.</p>
<p>However, there is another kind of convolution layer, which you might be interested in, called "Depth-wise Separable Convolution" which has been implemented in Tensorflow, and Keras <a href="https://keras.io/layers/convolutional/#separableconv2d" rel="nofollow noreferrer">supports it</a> as well. The idea is that on each channel a separate 2D-conv filter is applied and afterwards the resulting feature maps are aggregated using <code>k</code> <code>1*1</code> convolutions(<code>k</code> here is the number of output channels). It basically separates the learning of spatial features and depth-wise features. In his paper, <a href="https://arxiv.org/pdf/1610.02357.pdf" rel="nofollow noreferrer">"Xception: Deep Learning with Depthwise Separable Convolutions"</a>, Francois Chollet (the creator of Keras) shows that using depth-wise separable convolutions improves both the performance and accuracy of network. And <a href="https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d" rel="nofollow noreferrer">here</a> you can read more about different kinds of convolution layers used in deep learning.</p> | 2018-06-05 13:11:09.210000+00:00 | 2018-06-06 18:18:15.097000+00:00 | 2018-06-06 18:18:15.097000+00:00 | null | 50,700,307 | <p>I have a little comprehension problem with CNN. And I'm not quite sure how many filters and thus weights are trained.</p>
<p>Example: I have an input layer with the 32x32 pixels and 3 channels (i.e. shape of <code>(32,32,3)</code>). Now I use a 2D-convolution layer with 10 filters of shape <code>(4,4)</code>. So I end up with 10 channels each with shape of <code>(28,28)</code>, but do I now train a separate filter for each input channel or are they shared? Do I train 3x10x4x4 weights or do I train 10x4x4 weights?</p> | 2018-06-05 12:39:54.343000+00:00 | 2018-07-02 10:24:48.137000+00:00 | 2018-07-02 10:24:48.137000+00:00 | python|tensorflow|neural-network|keras|conv-neural-network | ['https://keras.io/layers/convolutional/#separableconv2d', 'https://arxiv.org/pdf/1610.02357.pdf', 'https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d'] | 3 |
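<p>(Following up on the depth-wise separable convolution mentioned at the end of the answer above: a short sketch comparing its parameter count for the same 32x32x3 input and 10 filters of size 4x4. The counts in the comments are what the formula predicts.)</p>
<pre><code>from keras import models, layers

model = models.Sequential()
model.add(layers.SeparableConv2D(10, (4, 4), input_shape=(32, 32, 3)))
model.summary()
# depth-wise step: 4*4*3 = 48 weights; point-wise step: 1*1*3*10 = 30 weights;
# plus 10 biases -> 88 parameters, versus 490 for the ordinary Conv2D above.
</code></pre>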
44,020,133 | <p>I found Zhongyu Kuang's code really useful, but I got stuck on how to dynamically switch between train and test ops, i.e. how to move from a python boolean is_training to a tensorflow boolean placeholder is_training. I need this functionality to be able to test the network on the validation set during the training.</p>
<p>Starting from his code and inspired by <a href="https://github.com/tensorflow/tensorflow/issues/1122#issuecomment-232535426" rel="nofollow noreferrer">this</a>, I wrote the following code:</p>
<pre><code>import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.ops import variables
from tensorflow.python.training import moving_averages

def batch_norm(x, scope, is_training, epsilon=0.001, decay=0.99):
"""
Returns a batch normalization layer that automatically switch between train and test phases based on the
tensor is_training
Args:
x: input tensor
scope: scope name
is_training: boolean tensor or variable
epsilon: epsilon parameter - see batch_norm_layer
decay: epsilon parameter - see batch_norm_layer
Returns:
The correct batch normalization layer based on the value of is_training
"""
assert isinstance(is_training, (ops.Tensor, variables.Variable)) and is_training.dtype == tf.bool
return tf.cond(
is_training,
lambda: batch_norm_layer(x=x, scope=scope, epsilon=epsilon, decay=decay, is_training=True, reuse=None),
lambda: batch_norm_layer(x=x, scope=scope, epsilon=epsilon, decay=decay, is_training=False, reuse=True),
)
def batch_norm_layer(x, scope, is_training, epsilon=0.001, decay=0.99, reuse=None):
"""
Performs a batch normalization layer
Args:
x: input tensor
scope: scope name
is_training: python boolean value
epsilon: the variance epsilon - a small float number to avoid dividing by 0
decay: the moving average decay
Returns:
The ops of a batch normalization layer
"""
with tf.variable_scope(scope, reuse=reuse):
shape = x.get_shape().as_list()
# gamma: a trainable scale factor
gamma = tf.get_variable("gamma", shape[-1], initializer=tf.constant_initializer(1.0), trainable=True)
# beta: a trainable shift value
beta = tf.get_variable("beta", shape[-1], initializer=tf.constant_initializer(0.0), trainable=True)
moving_avg = tf.get_variable("moving_avg", shape[-1], initializer=tf.constant_initializer(0.0), trainable=False)
moving_var = tf.get_variable("moving_var", shape[-1], initializer=tf.constant_initializer(1.0), trainable=False)
if is_training:
# tf.nn.moments == Calculate the mean and the variance of the tensor x
avg, var = tf.nn.moments(x, range(len(shape)-1))
update_moving_avg = moving_averages.assign_moving_average(moving_avg, avg, decay)
update_moving_var = moving_averages.assign_moving_average(moving_var, var, decay)
control_inputs = [update_moving_avg, update_moving_var]
else:
avg = moving_avg
var = moving_var
control_inputs = []
with tf.control_dependencies(control_inputs):
output = tf.nn.batch_normalization(x, avg, var, offset=beta, scale=gamma, variance_epsilon=epsilon)
return output
</code></pre>
<p>Then I use the batch_norm layer in this way:</p>
<pre><code>fc1_weights = tf.Variable(...)
fc1 = tf.matmul(x, fc1_weights)
fc1 = batch_norm(fc1, 'fc1_bn', is_training=is_training)
fc1 = tf.nn.relu(fc1)
</code></pre>
<p>Where is_training is a boolean placeholder. Note that the bias addition is not needed because it is replaced by the beta parameter, as explained in the <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">Batch Normalization paper</a>.</p>
<p>During execution:</p>
<pre><code># Training phase
sess.run(loss, feed_dict={x: bx, y: by, is_training: True})
# Testing phase
sess.run(loss, feed_dict={x: bx, y: by, is_training: False})
</code></pre> | 2017-05-17 08:59:18.273000+00:00 | 2017-05-17 08:59:18.273000+00:00 | null | null | 40,879,967 | <p>I had tried several versions of batch_normalization in tensorflow, but none of them worked! The results were all incorrect when I set batch_size = 1 at inference time.</p>
<p>Version 1: directly use the official version in tensorflow.contrib</p>
<pre><code>from tensorflow.contrib.layers.python.layers.layers import batch_norm
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training = True at training time and False at inference time.</p>
<p>Version 2: from <a href="https://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow/38320613#38320613">How could I use Batch Normalization in TensorFlow?</a></p>
<pre><code>def batch_norm_layer(x, train_phase, scope_bn='bn'):
bn_train = batch_norm(x, decay=0.999, epsilon=1e-3, center=True, scale=True,
updates_collections=None,
is_training=True,
reuse=None, # is this right?
trainable=True,
scope=scope_bn)
bn_inference = batch_norm(x, decay=0.999, epsilon=1e-3, center=True, scale=True,
updates_collections=None,
is_training=False,
reuse=True, # is this right?
trainable=True,
scope=scope_bn)
z = tf.cond(train_phase, lambda: bn_train, lambda: bn_inference)
return z
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm_layer(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training is a placeholder; it is True at training time and False at inference time.</p>
<p>version 3: from slim <a href="https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py</a></p>
<pre><code>def batch_norm_layer(inputs,
is_training=True,
scope='bn'):
decay=0.999
epsilon=0.001
inputs_shape = inputs.get_shape()
with tf.variable_scope(scope) as t_scope:
axis = list(range(len(inputs_shape) - 1))
params_shape = inputs_shape[-1:]
# Allocate parameters for the beta and gamma of the normalization.
beta, gamma = None, None
beta = tf.Variable(tf.zeros_initializer(params_shape),
name='beta',
trainable=True)
gamma = tf.Variable(tf.ones_initializer(params_shape),
name='gamma',
trainable=True)
moving_mean = tf.Variable(tf.zeros_initializer(params_shape),
name='moving_mean',
trainable=False)
moving_variance = tf.Variable(tf.ones_initializer(params_shape),
name='moving_variance',
trainable=False)
if is_training:
# Calculate the moments based on the individual batch.
mean, variance = tf.nn.moments(inputs, axis)
update_moving_mean = moving_averages.assign_moving_average(
moving_mean, mean, decay)
update_moving_variance = moving_averages.assign_moving_average(
moving_variance, variance, decay)
else:
# Just use the moving_mean and moving_variance.
mean = moving_mean
variance = moving_variance
# Normalize the activations.
outputs = tf.nn.batch_normalization(
inputs, mean, variance, beta, gamma, epsilon)
outputs.set_shape(inputs.get_shape())
return outputs
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm_layer(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training = True at training time and False at inference time.</p>
<p>version 4: like version3, but add tf.control_dependencies</p>
<pre><code>def batch_norm_layer(inputs,
decay=0.999,
center=True,
scale=True,
epsilon=0.001,
moving_vars='moving_vars',
activation=None,
is_training=True,
trainable=True,
restore=True,
scope='bn',
reuse=None):
inputs_shape = inputs.get_shape()
with tf.variable_op_scope([inputs], scope, 'BatchNorm', reuse=reuse):
axis = list(range(len(inputs_shape) - 1))
params_shape = inputs_shape[-1:]
# Allocate parameters for the beta and gamma of the normalization.
beta = tf.Variable(tf.zeros(params_shape), name='beta')
gamma = tf.Variable(tf.ones(params_shape), name='gamma')
# Create moving_mean and moving_variance add them to
# GraphKeys.MOVING_AVERAGE_VARIABLES collections.
moving_mean = tf.Variable(tf.zeros(params_shape), name='moving_mean',
trainable=False)
moving_variance = tf.Variable(tf.ones(params_shape), name='moving_variance',
trainable=False)
control_inputs = []
if is_training:
# Calculate the moments based on the individual batch.
mean, variance = tf.nn.moments(inputs, axis)
update_moving_mean = moving_averages.assign_moving_average(
moving_mean, mean, decay)
update_moving_variance = moving_averages.assign_moving_average(
moving_variance, variance, decay)
control_inputs = [update_moving_mean, update_moving_variance]
else:
# Just use the moving_mean and moving_variance.
mean = moving_mean
variance = moving_variance
# Normalize the activations.
with tf.control_dependencies(control_inputs):
return tf.nn.batch_normalization(
inputs, mean, variance, beta, gamma, epsilon)
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training = True at training time and False at inference time.</p>
<p>None of the 4 versions of batch normalization above works correctly. So, how do I use batch normalization correctly?</p>
<p>Another strange phenomenon is that if I make batch_norm_layer a pass-through like this, the inference results are all the same.</p>
<pre><code>def batch_norm_layer(inputs, is_training):
return inputs
</code></pre> | 2016-11-30 03:50:46.833000+00:00 | 2017-09-22 16:50:17.743000+00:00 | 2017-05-23 12:34:37.587000+00:00 | tensorflow|deep-learning | ['https://github.com/tensorflow/tensorflow/issues/1122#issuecomment-232535426', 'https://arxiv.org/abs/1502.03167'] | 2 |
46,087,210 | <p>Your problem is known to be NP-hard for the case of covering a shape with disks: see e.g. <a href="https://en.wikipedia.org/wiki/Geometric_set_cover_problem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Geometric_set_cover_problem</a>. I strongly suspect your variant of the set cover problem is no easier. So you have to resort to approximation algorithms that do the work in linear or polynomial time. Depending on which conditions you can relax in your solution, you may arrive at a completely different task with a known solution. So, if you explain how you came to this task and what the real task you wanted to solve was, then we may discuss what approximate solution could be good enough for your case.</p>
<p>For example, if you can accept a sub-optimal (but good enough) covering of your point set with oriented boxes of sub-optimal (but good enough) size and orientation, then you can go with some fast algorithm involving epsilon-nets (see e.g. <a href="https://en.wikipedia.org/wiki/%CE%95-net_(computational_geometry)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/%CE%95-net_(computational_geometry)</a> and <a href="https://en.wikipedia.org/wiki/Delone_set" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Delone_set</a>) and/or greedily subdividing the point set into subsets with a good enough approximate oriented bounding box for each subset.</p>
<p>Also, I did yet not use it myself in practice but if I had to think about approximate solution of your task knowing your constraints on the solution, I'd think along with <a href="https://arxiv.org/abs/1409.7425" rel="nofollow noreferrer">https://arxiv.org/abs/1409.7425</a> which is supposed to serve as a framework approach for generating approximate solutions of a family of tasks similar to yours. Take a look, may be you see something explicitly useful for you or perhaps you see there useful words to google ready to use solutions.</p> | 2017-09-07 03:07:45.993000+00:00 | 2017-09-07 03:07:45.993000+00:00 | null | null | 46,072,423 | <p>I'm working with C# WPF.</p>
<p>For a while now I have been looking for an algorithm to solve my problem. It is probably not trivial and goes into 3D graphics.</p>
<p>I have a 2D surface in a 3D space (can also be represented by a point cloud).</p>
<p>I need to split up this surface into smaller bits, which should fit into a specific box (for example 300 x 300 x 15). </p>
<p>I'm looking for an algorithm that works in 3d which is not axis aligned, something like a minimal volume bounding box but which splits up the volume into smaller boxes if the box is bigger than the specific volume.</p>
<p>I suspect an optimization problem of OBB and a lot of iterations, but I have no idea how to tackle this.</p>
<p>The picture illustrates the problem a bit. The red and the black boxes are not forced to be axis aligned, and they should be < or = to the max box size (size, not volume!).</p>
<p><img src="https://i.stack.imgur.com/IXyxr.png" alt="picture"></p>
<p>Thank you all for your support!</p> | 2017-09-06 10:07:04.797000+00:00 | 2017-09-07 03:07:45.993000+00:00 | 2017-09-06 10:15:38.637000+00:00 | c#|wpf|point-clouds|bounding-box|helix-3d-toolkit | ['https://en.wikipedia.org/wiki/Geometric_set_cover_problem', 'https://en.wikipedia.org/wiki/%CE%95-net_(computational_geometry)', 'https://en.wikipedia.org/wiki/Delone_set', 'https://arxiv.org/abs/1409.7425'] | 4 |
37,393,477 | <p>If you're trying to do a graphletCount then the way to do it is within an 'ergm()' command, as follows:</p>
<pre><code>ergm(GY2~graphletCount(n))
</code></pre>
<p>n can be any number from 0 to 29 corresponding to the 30 possible graphlets that exist with up to 5 nodes. It's also possible to define a vector 'c(1,3,5)' to run the graphletCount. Be aware that n is an optional argument and can be left blank, which will run all 30 possible counts.</p>
<p>See the <a href="http://arxiv.org/pdf/1405.7348.pdf" rel="nofollow">paper on ergm.graphlets for examples</a></p> | 2016-05-23 14:16:51.757000+00:00 | 2016-06-29 10:13:05.977000+00:00 | 2016-06-29 10:13:05.977000+00:00 | null | 30,645,085 | <p>I loaded my interaction data, converted to an adjacency matrix and then converted it to an undirected graph.</p>
<p>I installed the packages; <code>ergm</code>, <code>ergm.graphlets</code>, <code>statnet</code>, etc.</p>
<p>When I write:</p>
<pre><code>InitErgmTerm.graphletCount(GY2)
</code></pre>
<p>I get the following error message:</p>
<blockquote>
<p>Error: is.directed(nw) : is.directed requires an argument of <strong>class
network</strong>.</p>
</blockquote>
<p>I couldn't find a solution in the tutorial, and I would really appreciate if someone clarifies this.</p> | 2015-06-04 13:26:35.677000+00:00 | 2016-06-29 10:13:05.977000+00:00 | 2015-06-04 13:42:06.080000+00:00 | r | ['http://arxiv.org/pdf/1405.7348.pdf'] | 1 |
48,069,863 | <p>It seems possible to make the brute force approach run fairly quickly.</p>
<p>If you preprocess each sequence into a balanced tree where each node is augmented with the min of that subtree, then you can find the min of any subrange of that sequence in O(log n) time by splitting the tree at the appropriate points. See, for example, <a href="https://arxiv.org/abs/1612.05665" rel="nofollow noreferrer">this paper</a> for more information. Note that this preprocessing takes O(n) time.</p>
<p>Let's call the range (i,j) a <em>window</em>. The complexity of this problem doesn't depend on the specific (i,j), but rather the size of the window (that is, j-i+1). For a window size of m (=j-i+1), there are n-m+1 windows of that size. For each window, there are m+1 places where you can "cut" the window so that some prefix of the elements come from from sequence <code>a</code> and the suffix comes from sequence <code>b</code>. You pay O(log n) for each cut (to split the binary trees as I mentioned above). That's a total cost of <strong>O((n-m+1) * (m+1) * log(n)).</strong></p>
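<p>To make the window/cut enumeration concrete, here is a small Python sketch. It uses a sparse table for the range minimums instead of the splittable balanced trees described above (only because it is shorter to write), so each range-min query is O(1) after O(n log n) preprocessing; the enumeration of windows and cuts is exactly the one counted above.</p>
<pre><code>class SparseTable:
    """O(n log n) preprocessing, O(1) min over arr[lo:hi] (requires lo < hi)."""
    def __init__(self, arr):
        n = len(arr)
        self.t = [list(arr)]
        j = 1
        while (1 << j) <= n:
            prev, half = self.t[-1], 1 << (j - 1)
            self.t.append([min(prev[i], prev[i + half])
                           for i in range(n - (1 << j) + 1)])
            j += 1

    def query(self, lo, hi):
        j = (hi - lo).bit_length() - 1
        return min(self.t[j][lo], self.t[j][hi - (1 << j)])

def best_max_of_min(a, b, m):
    """Max over all windows of size m and all cuts of min(prefix from a, suffix from b)."""
    n = len(a)
    sa, sb = SparseTable(a), SparseTable(b)
    best = None
    for start in range(n - m + 1):          # window a[start:start+m] / b[start:start+m]
        for cut in range(m + 1):            # first `cut` elements from a, the rest from b
            parts = []
            if cut > 0:
                parts.append(sa.query(start, start + cut))
            if cut < m:
                parts.append(sb.query(start + cut, start + m))
            best = min(parts) if best is None else max(best, min(parts))
    return best

print(best_max_of_min([3, 2, 4, 1], [4, 6, 1, 3], 3))   # 2, matching the worked example in the question
</code></pre>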
<p>There is probably a faster way to do this, by reusing splits, or by noticing that nearby windows share a lot of elements. But regardless, I think the binary tree splitting trick I mentioned above might be helpful!</p> | 2018-01-03 00:57:58.347000+00:00 | 2018-01-03 01:08:40.990000+00:00 | 2018-01-03 01:08:40.990000+00:00 | null | 48,069,319 | <p>Let's say we have two arrays of ints of equal length, <code>a_1, ..., a_n</code> and <code>b_1, ..., b_n</code>. For any given index pairs <code>i</code> and <code>j</code> with <code>1<=i<j<=n</code>, we need to find the max of the min for any sequence of the form <code>a_k, ..., a_{l-1}, b_l, ..., b_{j-i+k}</code> with <code>0<=k<=n-j+i</code> and <code>l</code> can be <code>j-i+k+1</code>, i.e. that sequence is purely from array <code>a</code>. When <code>k=0</code>, the sequence is purely from array <code>b</code>.</p>
<p>We want to do this for all pairs of <code>i</code> and <code>j</code> very efficiently.</p>
<p>Example, given </p>
<pre><code>`a=[3,2,4,1]` and `b=[4,6,1,3]`
when `i=1, j=3`, the sequence can be
`[3,2,4]`, min is 2
`[3,2,1]`, min is 1
`[3,6,1]`, min is 1
`[2,4,1]`, min is 1
`[2,4,3]`, min is 2
`[2,1,3]`, min is 1
`[4,6,1]`, min is 1
`[6,1,3]`, min is 1
</code></pre>
<p>So the max is 2 for this input.</p>
<p>Is there a good way to run this efficiently?</p> | 2018-01-02 23:37:23.127000+00:00 | 2018-01-03 01:08:40.990000+00:00 | 2018-01-02 23:53:07.787000+00:00 | algorithm | ['https://arxiv.org/abs/1612.05665'] | 1 |
51,250,513 | <p>The trend these days is to perform end-to-end learning rather than having a model learn some abstract representation and then feed this representation to some other model (e.g. SVM).</p>
<p>The intuition behind this trend is the following: if you optimise a model <code>A</code> on some subtask <code>S1</code> and a model <code>B</code> on another subtask <code>S2</code>, both models will converge to some local optimum solution. The combination of the two local optima is expected to be suboptimal compared to an optimum that would have been obtained by optimising on the <em>full</em> task <code>S = (S1 + S2)</code>. When optimising a model end-to-end, you can adjust all the parts of your model together to better solve the task. However, when you split your model and train separately its different parts, you <em>break</em> the direct <em>signal</em> between the parts and make it harder to improve the output of model <code>A</code> for the specific task of improving the results from model <code>B</code> as you do not have a direct way to optimise the two models alongside.</p>
<p>What you're suggesting was quite popular in the past. For instance, the original <a href="https://arxiv.org/abs/1311.2524" rel="nofollow noreferrer">RCNN</a> paper by Girshick was using a pretrained convolutional neural network to extract features that were then fed to an SVM for classification.</p>
<p>However, this approach was abandoned in the following iteration of R-CNN, <a href="https://arxiv.org/pdf/1504.08083.pdf" rel="nofollow noreferrer">Fast RCNN</a>, the SVM step being replaced by a softmax. In table 8 section 5.4 of <em>Fast R-CNN</em>, the authors compare the same model with SVM vs. softmax and they come to the conclusion that softmax slightly outperforms the SVM version.</p> | 2018-07-09 16:53:27.497000+00:00 | 2018-07-09 18:38:59.740000+00:00 | 2018-07-09 18:38:59.740000+00:00 | null | 51,248,920 | <p>The common way to do "transfer-learning", or "retraining" on inception model is to take the bottleneck layer from the model, squeezing the bottleneck tensor as a flat 2048 neuron layer, then add a final layer with the number neurons matching the number of categories to classify (and eventually the softmax).</p>
<p>My question is: instead of training this bottleneck layer as a neural network, why not feed these highly abstracted 2048 features to an SVM, which could probably achieve a better result?</p>
<p>Many thanks!</p> | 2018-07-09 15:20:48.660000+00:00 | 2018-07-10 06:24:54.760000+00:00 | 2018-07-10 06:24:54.760000+00:00 | machine-learning|deep-learning|svm | ['https://arxiv.org/abs/1311.2524', 'https://arxiv.org/pdf/1504.08083.pdf'] | 2 |
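<p>(For reference, the two-stage setup the question describes, i.e. frozen bottleneck features fed to a separate classifier, looks roughly like this with scikit-learn; <code>features</code> and <code>labels</code> are placeholder arrays standing in for the 2048-d activations and class ids.)</p>
<pre><code>import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# placeholder data: 1000 images already pushed through the frozen network,
# giving one 2048-dimensional bottleneck vector per image
features = np.random.rand(1000, 2048)
labels = np.random.randint(0, 5, size=1000)

x_tr, x_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2, random_state=0)

svm = LinearSVC(C=1.0)
svm.fit(x_tr, y_tr)
print("held-out accuracy:", svm.score(x_te, y_te))
</code></pre>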
71,957,898 | <blockquote>
<p>It seems to run about twice as fast as the native sort function in golang</p>
</blockquote>
<p>Note that the native sort for slice will evolve with Go 1.19 (Q4 2022).<br />
See:</p>
<ul>
<li><a href="https://github.com/golang/go/issues/50154" rel="nofollow noreferrer">issue 50154</a>,</li>
<li><a href="https://go-review.googlesource.com/c/exp/+/399315/" rel="nofollow noreferrer">CL 399315</a>,</li>
<li><a href="https://github.com/golang/go/commit/72e77a7f41bbf45d466119444307fd3ae996e257" rel="nofollow noreferrer">commit 72e77a7</a> by <a href="https://github.com/zhangyunhao116" rel="nofollow noreferrer">ZhangYunHao</a>.</li>
</ul>
<blockquote>
<h2>sort: use pdqsort</h2>
<ul>
<li>Across all benchmarks, pdqsort is never significantly slower than the previous algorithm.</li>
<li>In common patterns, pdqsort is often faster (i.e. 10x faster in sorted slices).</li>
</ul>
<p>The <code>pdqsort</code> is described at <a href="https://arxiv.org/pdf/2106.05123.pdf" rel="nofollow noreferrer">Pattern-defeating Quicksort (pdf)</a> by <strong><a href="https://github.com/orlp" rel="nofollow noreferrer">Orson R. L. Peters</a></strong>.</p>
<p>(extract)<br />
Pattern-defeating
quicksort is often the best choice of algorithm overall for small to medium input sizes or data type sizes.<br />
It and other quicksort variants suffer from datasets that
are too large to fit in cache, where is4o shines.<br />
The latter algorithm however suffers from bad performance on smaller sizes, future research could perhaps combine the best of these two algorithms</p>
<p>This CL is inspired by both C++ implementation and Rust implementation.</p>
<ul>
<li><a href="https://github.com/orlp/pdqsort" rel="nofollow noreferrer">C++ implementation</a></li>
<li><a href="https://docs.rs/pdqsort/latest/pdqsort/" rel="nofollow noreferrer">Rust implementation</a></li>
</ul>
</blockquote> | 2022-04-21 16:32:12.993000+00:00 | 2022-04-21 16:47:05.367000+00:00 | 2022-04-21 16:47:05.367000+00:00 | null | 23,276,417 | <p>I was just playing around with sorting in golang and I found a qsort function on stackoverflow. It seems to run about twice as fast as the native sort function in golang. I've tried it with different input sizes and tested that it works.</p>
<p>Could anyone explain why this happens?</p>
<p>Here is the code you can test it on your pc:</p>
<pre><code>package main
import (
"fmt"
"math/rand"
"sort"
"time"
)
func qsort(a []int) []int {
if len(a) < 2 {
return a
}
left, right := 0, len(a)-1
// Pick a pivot
pivotIndex := rand.Int() % len(a)
// Move the pivot to the right
a[pivotIndex], a[right] = a[right], a[pivotIndex]
// Pile elements smaller than the pivot on the left
for i := range a {
if a[i] < a[right] {
a[i], a[left] = a[left], a[i]
left++
}
}
// Place the pivot after the last smaller element
a[left], a[right] = a[right], a[left]
// Go down the rabbit hole
qsort(a[:left])
qsort(a[left+1:])
return a
}
func main() {
// Create an array with random integers
rand.Seed(30)
size := 1000000
array1 := make([]int, size)
start := time.Now()
for i, _ := range array1 {
array1[i] = rand.Int()
}
fmt.Println("Creating array with ", size, " elements...")
fmt.Println("--- ", time.Since(start), " ---")
// Create a copy of the unsorted array
array2 := make([]int, size)
copy(array2, array1)
// Sort using the native function
start = time.Now()
sort.Ints(array1)
fmt.Println("Sorting with the native sort...")
fmt.Println("--- ", time.Since(start), " ---")
// Sort using custom qsort
start = time.Now()
qsort(array2)
fmt.Println("Sorting with custom qsort...")
fmt.Println("--- ", time.Since(start), " ---")
}
</code></pre> | 2014-04-24 17:59:39.013000+00:00 | 2022-04-21 16:47:05.367000+00:00 | null | sorting|go|native|qsort | ['https://github.com/golang/go/issues/50154', 'https://go-review.googlesource.com/c/exp/+/399315/', 'https://github.com/golang/go/commit/72e77a7f41bbf45d466119444307fd3ae996e257', 'https://github.com/zhangyunhao116', 'https://arxiv.org/pdf/2106.05123.pdf', 'https://github.com/orlp', 'https://github.com/orlp/pdqsort', 'https://docs.rs/pdqsort/latest/pdqsort/'] | 8 |
38,173,987 | <p>As stated in his comment, the OP is actually looking for all the pairs with an edit distance of less than 2.</p>
<p>Given an input of n words, a naive approach would be to make n(n-1)/2 comparisons, but fewer comparisons may be required when the distance L is an <a href="https://en.wikipedia.org/wiki/Edit_distance#Properties" rel="nofollow">edit distance that is a metric on strings</a>. </p>
<p>Levenshtein distance is such a metric: it satisfies the 4 required metric axioms, including the triangle inequality.</p>
<p><strong>Edit:</strong></p>
<p>Given this, we can use the method proposed by Sergey Brin (Google's co-founder) in his paper <a href="http://www.vldb.org/conf/1995/P574.PDF" rel="nofollow">Near Neighbor Search in Large Metric Spaces</a> back in 1995, to solve our problem.</p>
<p>Quoting from the paper: Given a metric space (X, d), a data set Y ⊆ X, a query point x ∈ X, and a range r ∈ R, the near neighbors of x are the set of points y ∈ Y, such that d(x, y) ≤ r.</p>
<p>In this paper, Brin introduced GNAT (Geometric Near-neighbor Access Tree) - a data structure to solve this problem. Brin actually tests the performance of his algorithm using the Levenshtein distance (which he calls "Edit distance") against two text corpora.</p>
<p>Over the years GNAT become well-known and widely used. Some improvements to GNAT where suggested in <a href="https://arxiv.org/pdf/1605.05944v1.pdf" rel="nofollow">Geometric Near-neighbor Access Tree (GNAT) revisited</a> - Fredriksson 2016.</p> | 2016-07-03 20:21:19.570000+00:00 | 2016-07-04 20:03:19.660000+00:00 | 2016-07-04 20:03:19.660000+00:00 | null | 38,169,332 | <p>I have a large array of words (300k words) and I want to find the edit distance between each word, so I was just iterating over it and doing running through this version of the levenstein algorithm:</p>
<pre><code>unsigned int edit_distance(const std::string& s1, const std::string& s2)
{
const std::size_t len1 = s1.size(), len2 = s2.size();
std::vector<std::vector<unsigned int>> d(len1 + 1, std::vector<unsigned int>(len2 + 1));
d[0][0] = 0;
for (unsigned int i = 1; i <= len1; ++i) d[i][0] = i;
for (unsigned int i = 1; i <= len2; ++i) d[0][i] = i;
for (unsigned int i = 1; i <= len1; ++i)
for (unsigned int j = 1; j <= len2; ++j)
// note that std::min({arg1, arg2, arg3}) works only in C++11,
// for C++98 use std::min(std::min(arg1, arg2), arg3)
d[i][j] = std::min({ d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + (s1[i - 1] == s2[j - 1] ? 0 : 1) });
return d[len1][len2];
}
</code></pre>
<p>So what I was wondering is whether there is a more efficient way of doing this. I heard about Levenshtein Automata, but I wasn't sure if that would be any more efficient.</p>
<p>I would imagine that you could avoid processing the same thing over and over again by preprocessing something, but I have no idea how to actually achieve it (a rough estimate suggests that preprocessing everything would take around 10^28 operations, so that would not be an improvement)</p> | 2016-07-03 11:09:09.267000+00:00 | 2016-07-04 20:03:19.660000+00:00 | null | c++|algorithm | ['https://en.wikipedia.org/wiki/Edit_distance#Properties', 'http://www.vldb.org/conf/1995/P574.PDF', 'https://arxiv.org/pdf/1605.05944v1.pdf'] | 3
25,248,693 | <p>Take a look at L-FLAT, a Logtalk Toolkit for Formal Languages and Automata Theory. You can run it most Prolog compilers:</p>
<p><a href="https://code.google.com/p/lflat/" rel="nofollow">https://code.google.com/p/lflat/</a></p>
<p>It includes several examples of defining and operating on automata. There's also a paper about the system that you can download from:</p>
<p><a href="http://arxiv.org/abs/1112.3783" rel="nofollow">http://arxiv.org/abs/1112.3783</a></p> | 2014-08-11 17:09:46.860000+00:00 | 2014-08-11 17:09:46.860000+00:00 | null | null | 25,193,410 | <p>I'm studying Formal languages and compilers, and I'm doing a little bit hard to understand everything.
Is there a tool that allows you to create automata and grammars and perform operations on them?
Operations such as: minimize an automaton, go from an automaton to a grammar and from a grammar to an automaton, make a grammar epsilon-free, etc.</p>
<p>Thank you very much</p> | 2014-08-07 22:40:30.633000+00:00 | 2014-08-11 17:09:46.860000+00:00 | null | parsing|grammar|finite-automata|automata | ['https://code.google.com/p/lflat/', 'http://arxiv.org/abs/1112.3783'] | 2 |
44,393,143 | <p>There are a couple of works that deal with this problem. Because you make the training set harder the training error will be lower, however your generalization might be better. It has been shown that adding noise can have stability effects for training Generative Adversarial Networks (<a href="https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_35.pdf?attachauth=ANoY7cobNEKmoWjNBHmgxeMfiyFr0oIuVnlRxknz7M6HSru8m0ELxGAm-mZRawxMB8QAn2TbRt1bnJpXLtTXJUgh8qcikaVHC3LC3B60GkJk-lJMC-amVmxOlUJmeTf3KfIqdcPvLjSCDbAnq2WzBOOl-GxICiFru007Q8nLhvjlffuqfNv5AAIapHMEz71oQ9gvQbrAcWnb_KS47ZAofSYXyyorCGgyE5DtrhDwLYXDidGR4i2ZOOc%3D&attredirects=0" rel="nofollow noreferrer">Adversarial Training</a>).</p>
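<p>If you do want to experiment with input noise for a classifier, it is usually easier to inject it with a dedicated noise layer on float inputs than to flip random bits in the uint8 array by hand. A minimal Keras sketch (the layer sizes are made up, only to illustrate the mechanism; scale the images to [0, 1] first, otherwise a small noise stddev is negligible):</p>
<pre><code>from keras.models import Sequential
from keras.layers import GaussianNoise, Conv2D, Flatten, Dense

# the GaussianNoise layer is only active at training time
model = Sequential([
    GaussianNoise(0.05, input_shape=(256, 256, 3)),
    Conv2D(32, (3, 3), activation='relu'),
    Flatten(),
    Dense(10, activation='softmax'),
])
</code></pre>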
<p>For classification tasks it is not that cut and dried. Not many works have actually dealt with this topic. The closest one, to the best of my knowledge, is this one from Google (<a href="https://arxiv.org/pdf/1412.6572.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1412.6572.pdf</a>), where they show the limitations of training without noise. They do report a regularization effect, but not actually better results than using other methods.</p> | 2017-06-06 14:45:09.047000+00:00 | 2017-06-06 14:45:09.047000+00:00 | null | null | 44,392,474 | <p>I've been thinking that adding noise to an image can prevent overfitting and also "increase" the dataset by adding variations to it. I'm only trying to add some random 1s to images that has shape (256,256,3) which uses uint8 to represent its color. I don't think that can affect the visualization at all (I showed both images with matplotlib and they seems almost the same) and has only ~0.01 mean difference in the sum of their values. </p>
<p>But it doesn't seem to have any advantages. After training for a long time it's still not as good as the one that doesn't use noise.</p>
<p>Has anyone tried to use noise for image classification tasks like this? Is it eventually better?</p> | 2017-06-06 14:15:21.503000+00:00 | 2017-06-07 11:13:01.910000+00:00 | null | deep-learning | ['https://c4209155-a-62cb3a1a-s-sites.googlegroups.com/site/nips2016adversarial/WAT16_paper_35.pdf?attachauth=ANoY7cobNEKmoWjNBHmgxeMfiyFr0oIuVnlRxknz7M6HSru8m0ELxGAm-mZRawxMB8QAn2TbRt1bnJpXLtTXJUgh8qcikaVHC3LC3B60GkJk-lJMC-amVmxOlUJmeTf3KfIqdcPvLjSCDbAnq2WzBOOl-GxICiFru007Q8nLhvjlffuqfNv5AAIapHMEz71oQ9gvQbrAcWnb_KS47ZAofSYXyyorCGgyE5DtrhDwLYXDidGR4i2ZOOc%3D&attredirects=0', 'https://arxiv.org/pdf/1412.6572.pdf'] | 2 |
44,408,008 | <p>I wouldn't go and add noise to your data. Some papers employ input deformations during training to increase robustness and convergence speed of models. However, these deformations are statistically inefficient (not just for images but for any kind of data).
You can read <a href="https://arxiv.org/pdf/1312.6199v1.pdf" rel="nofollow noreferrer">Intriguing properties of Neural Networks</a> from Szegedy <em>et al.</em> for more details (and refer to references 9 & 13 for papers that use deformations).</p>
<p>If you want to avoid overfitting, you might be interested to read about <a href="http://www.deeplearningbook.org/contents/regularization.html" rel="nofollow noreferrer">regularization</a> instead.</p> | 2017-06-07 08:57:05.303000+00:00 | 2017-06-07 11:13:01.910000+00:00 | 2017-06-07 11:13:01.910000+00:00 | null | 44,392,474 | <p>I've been thinking that adding noise to an image can prevent overfitting and also "increase" the dataset by adding variations to it. I'm only trying to add some random 1s to images that has shape (256,256,3) which uses uint8 to represent its color. I don't think that can affect the visualization at all (I showed both images with matplotlib and they seems almost the same) and has only ~0.01 mean difference in the sum of their values. </p>
<p>But it doesn't seem to have any advantages. After training for a long time it's still not as good as the one that doesn't use noise.</p>
<p>Has anyone tried to use noise for image classification tasks like this? Is it eventually better?</p> | 2017-06-06 14:15:21.503000+00:00 | 2017-06-07 11:13:01.910000+00:00 | null | deep-learning | ['https://arxiv.org/pdf/1312.6199v1.pdf', 'http://www.deeplearningbook.org/contents/regularization.html'] | 2 |
60,443,339 | <p>Changing from TF1 to TF2, I was faced with the same question, and after a little bit of experimenting I found the solution below, which shows how to establish the interface between a function decorated with tf.function and a scipy optimizer. The important changes compared to the question are:</p>
<ol>
<li>As mentioned by Ives, scipy's lbfgs
needs to get function value and gradient, so you need to provide a function that delivers both and then set <code>jac=True</code></li>
<li>scipy's lbfgs is a Fortran function that expects the interface to provide np.float64 arrays while tensorflow tf.function uses tf.float32.
So one has to cast input and output.</li>
</ol>
<p>I provide an example of how this can be done for a toy problem below.</p>
<pre><code>import tensorflow as tf
import numpy as np
import scipy.optimize as sopt
def model(x):
return tf.reduce_sum(tf.square(x-tf.constant(2, dtype=tf.float32)))
@tf.function
def val_and_grad(x):
with tf.GradientTape() as tape:
tape.watch(x)
loss = model(x)
grad = tape.gradient(loss, x)
return loss, grad
def func(x):
return [vv.numpy().astype(np.float64) for vv in val_and_grad(tf.constant(x, dtype=tf.float32))]
resdd= sopt.minimize(fun=func, x0=np.ones(5),
jac=True, method='L-BFGS-B')
print("info:\n",resdd)
</code></pre>
<p>displays</p>
<pre><code>info:
fun: 7.105427357601002e-14
hess_inv: <5x5 LbfgsInvHessProduct with dtype=float64>
jac: array([-2.38418579e-07, -2.38418579e-07, -2.38418579e-07, -2.38418579e-07,
-2.38418579e-07])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 3
nit: 2
status: 0
success: True
x: array([1.99999988, 1.99999988, 1.99999988, 1.99999988, 1.99999988])
</code></pre>
<h1>Benchmark</h1>
<p>For comparing speed
I use
the lbfgs optimizer for a style transfer
problem (see <a href="https://arxiv.org/pdf/1910.09497.pdf" rel="noreferrer">here</a> for the network). Note that for this problem the network parameters are fixed and the input signal is adapted. As the optimized parameters (the input signal) are 1D, the function factory is not needed.</p>
<p>I compare four implementations</p>
<ol>
<li>TF1.12: TF1 with ScipyOptimizerInterface</li>
<li>TF2.0 (E): the approach above without using tf.function decorators</li>
<li>TF2.0 (G): the approach above using tf.function decorators</li>
<li>TF2.0/TFP: using the lbfgs minimizer from
<a href="https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/lbfgs_minimize" rel="noreferrer">tensorflow_probability</a></li>
</ol>
<p>For this comparison the optimization is stopped after 300 iterations (generally for convergence the problem requires 3000 iterations)</p>
<h2>Results</h2>
<pre><code>Method runtime(300it) final loss
TF1.12 240s 0.045 (baseline)
TF2.0 (E) 299s 0.045
TF2.0 (G) 233s 0.045
TF2.0/TFP 226s 0.053
</code></pre>
<p>The TF2.0 eager mode (TF2.0(E)) works correctly but is about 20% slower than the TF1.12 baseline version. TF2.0(G) with tf.function works fine and is marginally faster than TF1.12, which is a good thing to know.</p>
<p>The optimizer from tensorflow_probability (TF2.0/TFP) is slightly faster than TF2.0(G) using scipy's lbfgs but does not achieve the same error reduction. In fact, the decrease of the loss over time is not monotonic, which seems a bad sign. Comparing the two implementations of lbfgs (scipy and tensorflow_probability=TFP) it is clear that the Fortran code in scipy is significantly more complex.
So either the simplification of the algorithm in TFP is harming here or even the fact that TFP is performing all calculations in float32 may also be a problem.</p> | 2020-02-27 23:41:47.980000+00:00 | 2020-02-28 10:22:28.003000+00:00 | 2020-02-28 10:22:28.003000+00:00 | null | 59,029,854 | <p>After the introduction of Tensorflow 2.0 the scipy interface (tf.contrib.opt.ScipyOptimizerInterface) has been removed. However, I would still like to use the scipy optimizer <strong>scipy.optimize.minimize(method=’L-BFGS-B’)</strong> to train a neural network (<strong>keras model sequential</strong>). In order for the optimizer to work, it requires as input a function <strong>fun(x0)</strong> with <strong>x0</strong> being an array of shape (n,). Therefore, the first step would be to "flatten" the weights matrices to obtain a vector with the required shape. To this end, I modified the code provided by <a href="https://pychao.com/2019/11/02/optimize-tensorflow-keras-models-with-l-bfgs-from-tensorflow-probability/" rel="nofollow noreferrer">https://pychao.com/2019/11/02/optimize-tensorflow-keras-models-with-l-bfgs-from-tensorflow-probability/</a>. This provides a function factory meant to create such a function <strong>fun(x0)</strong>. However, the code does not seem to work and the loss function does not decrease. I would be really grateful if someone could help me work this out.</p>
<p>Here is the piece of code I am using:</p>
<pre><code>func = function_factory(model, loss_function, x_u_train, u_train)
# convert initial model parameters to a 1D tf.Tensor
init_params = tf.dynamic_stitch(func.idx, model.trainable_variables)
init_params = tf.cast(init_params, dtype=tf.float32)
# train the model with L-BFGS solver
results = scipy.optimize.minimize(fun=func, x0=init_params, method='L-BFGS-B')
def loss_function(x_u_train, u_train, network):
u_pred = tf.cast(network(x_u_train), dtype=tf.float32)
loss_value = tf.reduce_mean(tf.square(u_train - u_pred))
return tf.cast(loss_value, dtype=tf.float32)
def function_factory(model, loss_f, x_u_train, u_train):
"""A factory to create a function required by tfp.optimizer.lbfgs_minimize.
Args:
model [in]: an instance of `tf.keras.Model` or its subclasses.
loss [in]: a function with signature loss_value = loss(pred_y, true_y).
train_x [in]: the input part of training data.
train_y [in]: the output part of training data.
Returns:
A function that has a signature of:
loss_value, gradients = f(model_parameters).
"""
# obtain the shapes of all trainable parameters in the model
shapes = tf.shape_n(model.trainable_variables)
n_tensors = len(shapes)
# we'll use tf.dynamic_stitch and tf.dynamic_partition later, so we need to
# prepare required information first
count = 0
idx = [] # stitch indices
part = [] # partition indices
for i, shape in enumerate(shapes):
n = np.product(shape)
idx.append(tf.reshape(tf.range(count, count+n, dtype=tf.int32), shape))
part.extend([i]*n)
count += n
part = tf.constant(part)
def assign_new_model_parameters(params_1d):
"""A function updating the model's parameters with a 1D tf.Tensor.
Args:
params_1d [in]: a 1D tf.Tensor representing the model's trainable parameters.
"""
params = tf.dynamic_partition(params_1d, part, n_tensors)
for i, (shape, param) in enumerate(zip(shapes, params)):
model.trainable_variables[i].assign(tf.cast(tf.reshape(param, shape), dtype=tf.float32))
# now create a function that will be returned by this factory
def f(params_1d):
"""
This function is created by function_factory.
Args:
params_1d [in]: a 1D tf.Tensor.
Returns:
A scalar loss.
"""
# update the parameters in the model
assign_new_model_parameters(params_1d)
# calculate the loss
loss_value = loss_f(x_u_train, u_train, model)
# print out iteration & loss
f.iter.assign_add(1)
tf.print("Iter:", f.iter, "loss:", loss_value)
return loss_value
# store these information as members so we can use them outside the scope
f.iter = tf.Variable(0)
f.idx = idx
f.part = part
f.shapes = shapes
f.assign_new_model_parameters = assign_new_model_parameters
return f
</code></pre>
<p>Here <strong>model</strong> is a tf.keras.Sequential object.</p>
<p>Thank you in advance for any help!</p> | 2019-11-25 10:37:27.193000+00:00 | 2022-09-08 18:14:04.370000+00:00 | null | python|tensorflow|keras|neural-network|scipy-optimize | ['https://arxiv.org/pdf/1910.09497.pdf', 'https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/lbfgs_minimize'] | 2 |
63,256,464 | <p>I'm finding it quite hard to sum this up quickly. But here it goes!</p>
<h2>Models in DCGANS</h2>
<p>The DCGAN architecture by <a href="https://arxiv.org/abs/1511.06434" rel="nofollow noreferrer">Radford et al. 2015</a> uses two convolutional neural networks as the discriminator and generator models.</p>
<p>The discriminator develops a hierarchical structure of information that is distilled from images using convolutions. The results of convolutions are called <em>feature maps</em>. Deeper into the model, the convolutions derive feature maps that represent more abstract structures in the images.</p>
<p>This idea is essential to the function of the DCGAN, and is the reason you see the pattern in those numbers. Take a look at the discriminator model in <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">this example</a> to see this in action.</p>
<h2>Your model</h2>
<p>In the DCGAN example I have just linked, the generator is very similar to your model. In a simplistic sense, a DCGAN generator creates samples from a lower-dimensional space of representations, known as the <em>latent space</em>.</p>
<p>You can see how this model is almost the reverse of a discriminator: the image is 'built up' by transposed convolutions (again, put simply, transposed convolutions are convolutions for building images up) from a lower-dimensional tensor - a latent space vector.</p>
<h2>The number 256</h2>
<p>The number 256 you are asking about corresponds to the number of feature maps that are stored by the generator and this number decreases through the model as</p>
<pre><code>(7,7,256)
(7,7,128)
(14,14,64)
(28,28,1)
</code></pre>
<p>Think of this as 256 7 by 7 feature maps, then 128 7 by 7 feature maps etc. The <code>strides</code> parameter of the <code>Conv2DTranspose</code> layer is critical. If this number is not equal to one, the image output of this layer is not the same size as before. The last shape listed is this way because it is a sample created by the generator, not a collection of feature maps, hence the number 1.</p>
<p><a href="https://stats.stackexchange.com/questions/360899/difference-between-strided-and-non-strided-convolution">See here</a> for a good explanation on strides in convolutions.</p>
<h2>Re-writing for different images</h2>
<p>If you are going to use this model for different size images, you need to up-size some initial small tensor to the correct image dimensions. The values of the stride parameter in the layers of your model show you how the numbers I listed above relate to each other.</p>
<p>The <code>padding</code> parameter of the <code>Conv2DTranspose</code> layer is critical too. Setting it as <code>same</code> means the image has a convolution kernel for each pixel in the image (<a href="https://arxiv.org/pdf/1603.07285.pdf" rel="nofollow noreferrer">see here</a> for some excellent diagrams relating to this).</p>
<p>In the generator model it is very common to keep <code>padding=same</code> for every convolution. Essentially this means the image stays the same size from input to output of that layer.</p>
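<p>Putting strides and padding together: with <code>padding='same'</code>, a <code>Conv2DTranspose</code> layer simply multiplies each spatial dimension by its stride, which is exactly the 7 -> 7 -> 14 -> 28 progression above. A quick standalone check (not part of the tutorial code):</p>
<pre><code>import tensorflow as tf

x = tf.zeros((1, 7, 7, 256))   # one dummy tensor holding 256 feature maps of size 7x7
y = tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same')(x)
print(y.shape)                 # (1, 7, 7, 128)  - stride 1 keeps the spatial size
z = tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same')(y)
print(z.shape)                 # (1, 14, 14, 64) - stride 2 doubles it
</code></pre>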
<p>I hope this is some helpful information. In my own experience testing small deviations from a tried and tested model like this works well. The <code>assert</code> statements will help you make sure each layer works correctly.</p> | 2020-08-04 23:37:44.680000+00:00 | 2020-08-04 23:48:12.297000+00:00 | 2020-08-04 23:48:12.297000+00:00 | null | 63,253,714 | <p>I really have tried to do my due diligence here but I can't find a lot of documentation on why certain numbers are chosen. I'm also fairly hazy on how convolutions work in generators (have a better understanding in terms of classifiers) so that's not helping my case. I think my question should be pretty simple to address for some more experiences folks out there though.</p>
<p>Take Google's tutorial for example, the Generator class:</p>
<pre><code>def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
</code></pre>
<p>Where is 7x7x256 coming from? I understand that 7x7 is a multiple of the eventual 28x28 size, so that makes sense somewhat, but what is the 256 all about? And then in the following layers, I notice a pattern but I'm not sure how to re-write it so it works for a wholly different image size. Any help or direction is appreciated. Thanks!</p>
<p>EDIT:
Thanks to the helpful input I changed my gen to:</p>
<pre><code>def make_generator_model():
model = tf.keras.Sequential()
model.add(layers.Dense(8*8*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((8, 8, 256)))
assert model.output_shape == (None, 8, 8, 256) # Note: None is the batch size
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 8, 8, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 16, 16, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(32, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 32, 32, 32)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(16, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 64, 64, 16)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(8, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 128, 128, 8)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(3, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 256, 256, 3)
return model
</code></pre>
<p>and discriminator:</p>
<pre><code>def make_discriminator_model():
model = tf.keras.Sequential()
model.add(layers.Conv2D(8, (5, 5), strides=(2, 2), padding='same',
input_shape=[IMAGE_DIM[0], IMAGE_DIM[1], IMAGE_DIM[2]]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
print(model.output_shape)
model.add(layers.Conv2D(16, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
print(model.output_shape)
model.add(layers.Conv2D(32, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
print(model.output_shape)
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
print(model.output_shape)
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
print(model.output_shape)
model.add(layers.Conv2D(256, (5, 5), strides=(1, 1), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
print(model.output_shape)
model.add(layers.Flatten())
model.add(layers.Dense(1))
#16384 65536
return model
</code></pre> | 2020-08-04 19:18:45.747000+00:00 | 2020-08-05 19:08:46.960000+00:00 | 2020-08-05 19:08:46.960000+00:00 | tensorflow|machine-learning|keras | ['https://arxiv.org/abs/1511.06434', 'https://www.tensorflow.org/tutorials/generative/dcgan', 'https://stats.stackexchange.com/questions/360899/difference-between-strided-and-non-strided-convolution', 'https://arxiv.org/pdf/1603.07285.pdf'] | 4 |
6,779,374 | <p>Do you need just to check for self-intersections, or find all of them? The latter is harder than <code>O(N log N)</code>, as you can have <code>O(n^2)</code> intersections with <code>n</code> segments. </p>
<p>If you only need to find out if self-intersections exist, or find a small number of them, then look <a href="http://arxiv.org/abs/0812.0893v2" rel="nofollow">here</a>. This paper seems to claim just what you need, particularly in the polygon planarization section. I doubt implementing the algorithm described there would be simple, or worthwhile for any problem of reasonable size. But such an algorithm does exist. Disclaimer: I haven't tried to work through the paper and understand it.</p>
<p>Now, I could just run a standard <code>O(N log N)</code> algorithm to check if any two segments cross. But I believe that because we have some additional structure -- the order of the segments and the fact that each two consecutive segments meet at endpoints -- a simpler and faster (maybe <code>O(N)</code>?) algorithm could be devised.</p>
<p>Any ideas?</p> | 2011-07-21 14:58:37.310000+00:00 | 2011-07-21 16:21:32.663000+00:00 | null | geometry|computational-geometry | ['http://arxiv.org/abs/0812.0893v2'] | 1 |
70,145,517 | <p>You have a <code>2x2</code> table, but you are ignoring one of the rows. You need to add how many non-variants you had in each group. The answer to your question depends on that. Say you had 7 variants out of 30 cases and 0 variants out of 20 controls. Your contingency table would look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: center;">Cases</th>
<th style="text-align: left;">Controls</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">Variants</td>
<td style="text-align: center;">7</td>
<td style="text-align: left;">0</td>
</tr>
<tr>
<td style="text-align: right;">Non-variants</td>
<td style="text-align: center;">23</td>
<td style="text-align: left;">20</td>
</tr>
</tbody>
</table>
</div>
<p>Without this information, testing makes no sense. Now, you can use an exact test, such as Fisher's (conditional) or Barnard's (unconditional). I have described an unconditional test that seems to be more powerful than those (<a href="https://arxiv.org/abs/2110.14315" rel="nofollow noreferrer">link</a>). In this case,</p>
<pre><code>> library(mtest)
> m.test(list(c(7, 23), c(0, 20)))
[1] 0.01702871
</code></pre> | 2021-11-28 16:45:45.290000+00:00 | 2021-11-28 16:45:45.290000+00:00 | null | null | 69,523,809 | <p>I did a research on group of cases and controls. During my research I observed <strong>7 variants</strong> in group of cases while I <strong>did not observe</strong> any in controls.</p>
<p>I would like to test is there a <strong>significant difference between 7:0 finding</strong>.</p>
<p>I thought of doing Fishers exact test, but not sure how it can be performed on 1x2 table and is it a suitable test for such analysis. Also, I thought of correction, to exclude 0 from statistics, so maybe to compare 7.5 and <strong>0.5</strong>.</p>
<p>Is there <strong>a better test to perform</strong> for such cases?</p>
<p>All suggestions are welcomed. Thank you.</p> | 2021-10-11 09:19:28.860000+00:00 | 2021-11-28 16:45:45.290000+00:00 | null | statistics | ['https://arxiv.org/abs/2110.14315'] | 1 |
69,574,744 | <h3>What happens?</h3>
<p>As you already mentioned - it seems to be not well formed in soup, because it is missing the opening <code><link></code> and has only the <code></link></code>, so you won't get text out of it with the <code>text</code> attribute.</p>
<p>But the good news: there is a solution.</p>
<h3>How to fix?</h3>
<p>Just select the text with <code>next_sibling</code> of the <code><link></code> element:</p>
<pre><code>i.find('link').next_sibling
</code></pre>
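<p>In the context of your loop that would be (only the <code>link</code> line changes):</p>
<pre><code>for i in s.find_all('item'):
    title = i.find('title').text
    link = i.find('link').next_sibling   # the URL is the text node right after the empty <link/> tag
    date = i.find('pubdate').text
</code></pre>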
<h3>Output</h3>
<pre><code>[{"title": "Gitlab from YC to IPO", "link": "https://blog.ycombinator.com/gitlab-from-yc-to-ipo/", "date": "Thu, 14 Oct 2021 13:31:43 +0000"}, {"title": "Apple Joins Blender Development Fund", "link": "https://www.blender.org/press/apple-joins-blender-development-fund/", "date": "Thu, 14 Oct 2021 14:48:59 +0000"}, {"title": "Sunset Geometry (2016)", "link": "https://www.shapeoperator.com/2016/12/12/sunset-geometry/", "date": "Thu, 14 Oct 2021 14:29:08 +0000"}, {"title": "iPhone Macro: A Big Day for Small Things", "link": "https://lux.camera/iphone-macro-camera-a-big-day-for-small-things/", "date": "Mon, 11 Oct 2021 10:22:06 +0000"}, {"title": "Michelin Airless", "link": "https://www.michelin.com/en/innovation/vision-concept/airless/", "date": "Thu, 14 Oct 2021 14:36:58 +0000"}, {"title": "Release (YC W20) Is Hiring \u2013 Product Marketing Manager", "link": "https://releasehub.com/company#hire", "date": "Thu, 14 Oct 2021 17:00:15 +0000"}, {"title": "Global Climate Report \u2013 September 2021", "link": "https://www.ncdc.noaa.gov/sotc/global/202109", "date": "Thu, 14 Oct 2021 14:49:59 +0000"}, {"title": "Esbuild \u2013 An extremely fast JavaScript bundler", "link": "https://esbuild.github.io/", "date": "Thu, 14 Oct 2021 05:07:27 +0000"}, {"title": "Small Language Models Are Also Few-Shot Learners", "link": "https://aclanthology.org/2021.naacl-main.185/", "date": "Tue, 12 Oct 2021 09:59:34 +0000"}, {"title": "Who was Aleph Null? (2013)", "link": "http://bit-player.org/2013/who-was-aleph-null", "date": "Mon, 11 Oct 2021 08:35:29 +0000"}, {"title": "Hands-On Rust: Effective Learning Through 2D Game Development and Play", "link": "https://pragprog.com/titles/hwrust/hands-on-rust/", "date": "Thu, 14 Oct 2021 07:59:24 +0000"}, {"title": "Ask HN: What's the Point of Life?", "link": "https://news.ycombinator.com/item?id=28866558", "date": "Thu, 14 Oct 2021 16:38:15 +0000"}, {"title": "What I wish I knew when learning F#", "link": "https://danielbachler.de/2020/12/23/what-i-wish-i-knew-when-learning-fsharp.html", "date": "Thu, 14 Oct 2021 12:07:40 +0000"}, {"title": "Investing in Startups by Passing the Series 65", "link": "https://www.natecation.com/accredited-investor-investing-startups-series-65/", "date": "Wed, 13 Oct 2021 17:57:25 +0000"}, {"title": "OpenBSD 7.0", "link": "https://www.openbsd.org/70.html", "date": "Thu, 14 Oct 2021 10:24:21 +0000"}, {"title": "Countries are gathering in an effort to stop a biodiversity collapse", "link": "https://www.nytimes.com/2021/10/14/climate/un-biodiversity-conference-climate-change.html", "date": "Thu, 14 Oct 2021 13:32:00 +0000"}, {"title": "Alden Global Capital, the secretive hedge fund gutting newsrooms", "link": "https://www.theatlantic.com/magazine/archive/2021/11/alden-global-capital-killing-americas-newspapers/620171/", "date": "Thu, 14 Oct 2021 15:17:06 +0000"}, {"title": "Child suicides in Japan hit record high", "link": "https://www3.nhk.or.jp/nhkworld/en/news/20211013_19/", "date": "Thu, 14 Oct 2021 08:52:39 +0000"}, {"title": "Every search bar looks like a URL bar to users", "link": "https://shkspr.mobi/blog/2021/10/every-search-bar-looks-like-a-url-bar-to-users/", "date": "Thu, 14 Oct 2021 13:27:58 +0000"}, {"title": "Psychonetics: A nerd's toolset to work with mind and perception", "link": "http://deconcentration-of-attention.com/psychonetics.html", "date": "Tue, 12 Oct 2021 11:28:43 +0000"}, {"title": "FB seals off some internal message boards to prevent leaking, immediately leaked", "link": 
"https://www.businessinsider.com/facebook-whistleblower-leaks-restricts-staff-access-message-boards-elections-safety-2021-10", "date": "Thu, 14 Oct 2021 11:09:08 +0000"}, {"title": "Working around expired root certificates", "link": "https://scotthelme.co.uk/should-clients-care-about-the-expiration-of-a-root-certificate/", "date": "Mon, 11 Oct 2021 21:27:27 +0000"}, {"title": "An unprecedented wave of online bank fraud is hitting Britain", "link": "https://www.reuters.com/world/uk/welcome-britain-bank-scam-capital-world-2021-10-14/", "date": "Thu, 14 Oct 2021 09:57:39 +0000"}, {"title": "Interoperable Serendipity", "link": "https://noeldemartin.com/blog/interoperable-serendipity", "date": "Wed, 13 Oct 2021 12:02:37 +0000"}, {"title": "Instagram took down post with figure from paper showing male advantage in sports", "link": "https://twitter.com/SwipeWright/status/1448064426670583814", "date": "Thu, 14 Oct 2021 16:36:30 +0000"}, {"title": "IoT hacking and rickrolling my high school district", "link": "https://whitehoodhacker.net/posts/2021-10-04-the-big-rick", "date": "Tue, 12 Oct 2021 19:38:06 +0000"}, {"title": "Boeing says certain 787 parts improperly manufactured", "link": "https://www.reuters.com/business/aerospace-defense/boeing-deals-with-new-defect-787-dreamliner-wsj-2021-10-14/", "date": "Thu, 14 Oct 2021 13:26:46 +0000"}, {"title": "Practice Problems for Hardware Engineers", "link": "https://arxiv.org/abs/2110.06526", "date": "Thu, 14 Oct 2021 03:48:24 +0000"}, {"title": "Interface ergonomics: automation isn't just about time saved", "link": "https://macoy.me/blog/programming/InterfaceFriction", "date": "Wed, 13 Oct 2021 01:05:52 +0000"}, {"title": "Syncthing \u2013 a continuous file synchronization program", "link": "https://syncthing.net/", "date": "Thu, 14 Oct 2021 01:23:19 +0000"}]
</code></pre> | 2021-10-14 17:10:45.047000+00:00 | 2021-10-14 17:18:30.457000+00:00 | 2021-10-14 17:18:30.457000+00:00 | null | 69,573,625 | <p>I wanna scrape the data from code of this link: <a href="https://news.ycombinator.com/rss" rel="nofollow noreferrer">https://news.ycombinator.com/rss</a>. It includes the html syntax: "link>the URL</link' (It's full of open and close <> but cannot put it in here).
However, when using this code, the printed output of the link is: 'link/>the URL' and there is no content for the key 'link' in the json file.</p>
<pre><code>import requests
import bs4
from bs4 import BeautifulSoup
import json
import html5lib
def rss(x):
r = requests.get(x)
s = BeautifulSoup(r.content, features='html5lib')
the_list = []
for i in s.find_all('item'):
title = i.find('title').text
link = i.find('link').text
date = i.find('pubdate').text
article = {
'title' : title,
'link' : link,
'date' : date
}
the_list.append(article)
with open('the_list.json','w') as f:
json.dump(the_list,f)
rss('https://news.ycombinator.com/rss')
</code></pre> | 2021-10-14 15:43:22.290000+00:00 | 2021-10-14 17:18:30.457000+00:00 | 2021-10-14 16:03:29.753000+00:00 | json|beautifulsoup|python-requests|rss | [] | 0 |
58,076,540 | <p>If you plan to use neural networks with TensorFlow or Pytorch, it will be easy. As long as you can express the function <code>U</code> within the framework and the utility function is reasonably close to being continuous, you can back-propagate the utility to the network. You just ask the optimizer to maximize the utility and that's it.</p>
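<p>For the continuous case the recipe really is just "minimize -U". A tiny PyTorch sketch (the linear policy and the toy utility below are made up purely for illustration):</p>
<pre><code>import torch

T, n_features = 10, 5
X = torch.randn(T, n_features)              # inputs X_t for t = 0 .. T-1
policy = torch.nn.Linear(n_features, 1)     # maps X_t to an action a_t

def U(actions):                             # any differentiable utility of the whole sequence
    return -(actions ** 2).sum() + actions.mean()

opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for step in range(1000):
    actions = policy(X).squeeze(-1)         # a_0 ... a_{T-1}
    loss = -U(actions)                      # maximizing U == minimizing -U
    opt.zero_grad()
    loss.backward()
    opt.step()
</code></pre>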
<p>If the utility function is discrete, it gets tricky, but there are several tricks you might try. One of them is the <a href="http://www.cs.toronto.edu/~tingwuwang/REINFORCE.pdf" rel="nofollow noreferrer">REINFORCE algorithm</a> (Monte-Carlo policy gradient). Another trick that is getting quite popular is the <a href="https://casmls.github.io/general/2017/02/01/GumbelSoftmax.html" rel="nofollow noreferrer">Gumbel softmax</a>, which allows sampling of discrete actions and propagating the error to the network.</p>
<p>If you plan to use different classifiers (like decision forests or whatever), you might try something based on imitation learning like the <a href="https://arxiv.org/abs/0907.0786" rel="nofollow noreferrer">SEARN algorithm</a>.</p> | 2019-09-24 08:54:16.720000+00:00 | 2019-09-24 14:09:07.513000+00:00 | 2019-09-24 14:09:07.513000+00:00 | null | 58,075,653 | <p>I have a problem which involves optimization of actions over time:</p>
<ul>
<li>Let's assume I have a set of input variables <code>X</code> where each <code>X_i_t</code> has a
value at each point in time <code>t = 0 ... T</code>.</li>
<li>For each point in time, I would like to choose an action <code>a_t</code> of a set of
actions <code>A</code>,</li>
<li>such that a utility function <code>U(a0, ..., a_T)</code> is maximized.</li>
</ul>
<p>Note, the utility function does not have a closed-form solution and its value depends on the entire sequence of actions <code>a_0 ... a_T</code>.</p>
<p><strong>How would I implement something like this?</strong> I am perfectly happy with a keyword I can use to look up relevant literature. I do not need a full solution. - Though if somebody can point me to a python sklearn function which does this, I would definitely not say no...</p>
<p>My first intuition was "logistic regression" but there is no way to assign "correct labels" to an action <code>a_t</code> at time <code>t</code>, since the utility depends on the actions taken earlier and later in the time series.</p> | 2019-09-24 08:00:12.147000+00:00 | 2019-09-24 14:09:07.513000+00:00 | null | machine-learning|artificial-intelligence|classification | ['http://www.cs.toronto.edu/~tingwuwang/REINFORCE.pdf', 'https://casmls.github.io/general/2017/02/01/GumbelSoftmax.html', 'https://arxiv.org/abs/0907.0786'] | 3 |
67,898,877 | <p>You are asking about <a href="https://arxiv.org/pdf/2007.00487.pdf" rel="nofollow noreferrer">continual learning</a> - this is a very active field of research, and there is no single solution/method to tackle it. You'll have to do more research to find the right approach for your specific settings.</p> | 2021-06-09 06:44:53.420000+00:00 | 2021-06-09 06:44:53.420000+00:00 | null | null | 67,898,366 | <p>I used yolov5 to train an object detection model. is it possible to add more annotated images after i have already trained the original model or must i restart the whole training with the new set of images?</p> | 2021-06-09 05:56:51.893000+00:00 | 2021-06-09 06:45:23.573000+00:00 | 2021-06-09 06:45:23.573000+00:00 | machine-learning|pytorch|object-detection | ['https://arxiv.org/pdf/2007.00487.pdf'] | 1 |
67,475,813 | <p>It sounds like you are roughly interested in what is referred to as <em>neural network verification</em>. This field broadly consists of answering the question: given a range of possible inputs, what is the range of possible outputs from a neural network with a given set of weights? A few things to note:</p>
<ol>
<li><p>A neural network is essentially a complex, non-linear function. That is, it maps the input space to the output space. Defining an output range does not make sense except with respect to an input range. In your question you make no reference to the inputs, so your examples are flawed/incomplete.</p>
</li>
<li><p>In general, neural network verification is an emerging field with most published works being fairly recent (last 5-7 years). That being said, there are exact and approximate methods for fully connected networks with a variety of activation functions. I'll list a few such methods here:</p>
</li>
</ol>
<p><a href="https://arxiv.org/abs/2004.05519" rel="nofollow noreferrer">https://arxiv.org/abs/2004.05519</a> - MATLAB toolbox, but you could export your neural network in ONNX format and then use MATLAB for the verification/output range analysis.</p>
<p><a href="https://arxiv.org/abs/1804.10829" rel="nofollow noreferrer">https://arxiv.org/abs/1804.10829</a> - specifically for ReLU activation function.</p>
<p><a href="https://anwu1219.github.io/download/Marabou.pdf" rel="nofollow noreferrer">https://anwu1219.github.io/download/Marabou.pdf</a> with python API available here: <a href="https://github.com/NeuralNetworkVerification/Marabou" rel="nofollow noreferrer">https://github.com/NeuralNetworkVerification/Marabou</a></p>
<p>The field is still evolving so you may have to do some of the codings yourself rather than using pre-existing libraries in some cases, but these papers/ a search query for <em>neural network verification</em> should at least give you some ideas of where to start.</p> | 2021-05-10 18:27:42.250000+00:00 | 2021-05-15 12:25:28.800000+00:00 | 2021-05-15 12:25:28.800000+00:00 | null | 67,448,783 | <p>Let's assume I have a neural network like the following:</p>
<pre><code>model = keras.models.Sequential()
model.add(keras.layers.Dense(10, input_shape=(5,), activation='relu'))
model.add(keras.layers.Dense(4, activation='linear'))
</code></pre>
<p>With n output neurons with a linear activation function.</p>
<p>The training process is not important here, so we can take a look at the random weights that keras initialized using:</p>
<pre><code>model.weights
</code></pre>
<p>Of course, in a real example, these weights should be adjusted in the training process.</p>
<p>Depending on these <code>model.weights</code>, each of the output neurons returns values in a range.</p>
<p>I would like to calculate this exact range.</p>
<p>Does keras offer any function to calculate it?</p>
<p>I built a flawed piece of code to make an approximation of it, using a loop and predicting random inputs. But this would not be really useful in a real example with many more inputs/neurons/weights.</p>
<p>Here are a few examples trying to clarify my question (all of them assume that the input values are between 0 and 1):</p>
<pre><code>model = keras.models.Sequential()
model.add(keras.layers.Dense(1, input_shape=(2,),
activation='linear', use_bias=False))
model.set_weights([np.array([1, 1]).reshape(2, 1)])
</code></pre>
<p>For the previous example the output neuron results would be between 0 and 2</p>
<pre><code>model.set_weights([np.array([-0.5, 1]).reshape(2, 1)])
</code></pre>
<p>For the previous example the output neuron results would be between -0.5 and 1</p>
<pre><code>model = keras.models.Sequential()
model.add(keras.layers.Dense(2, input_shape=(2,), activation='linear', use_bias=False))
model.add(keras.layers.Dense(1, activation='linear', use_bias=False))
model.set_weights([np.array([1, 1, 1, 1]).reshape(2,2), np.array([1, 1]).reshape(2,1)])
</code></pre>
<p>For the previous example, the output neuron results would be between <code>0</code> and <code>4</code></p>
<p>These are simplified examples. In a real scenario with a much more complex network structure, activation functions, biases... these ranges are not obvious to calculate.</p> | 2021-05-08 14:46:02.397000+00:00 | 2021-05-15 12:25:28.800000+00:00 | 2021-05-15 12:23:26+00:00 | python|tensorflow|keras|deep-learning|neural-network | ['https://arxiv.org/abs/2004.05519', 'https://arxiv.org/abs/1804.10829', 'https://anwu1219.github.io/download/Marabou.pdf', 'https://github.com/NeuralNetworkVerification/Marabou'] | 4
57,893,292 | <blockquote>
<p>Depthwise convolutions provide significant performance benefits
owing to the reduction in both parameters and mult-adds.
However, training depthwise convolution layers with GPUs is slow
in current deep learning frameworks because their implementations
cannot fully utilize the GPU capacity.</p>
</blockquote>
<p><a href="https://arxiv.org/pdf/1803.09926.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1803.09926.pdf</a></p> | 2019-09-11 16:25:58.307000+00:00 | 2019-10-15 09:20:49.337000+00:00 | 2019-10-15 09:20:49.337000+00:00 | null | 39,368,367 | <p>I am trying out a recent arxiv work called "<a href="http://128.84.21.199/abs/1608.04337" rel="noreferrer">Factorized CNN</a>",</p>
<p>which mainly argues that spatially separated convolution (depth-wise convolution), together with channel-wise linear projection (1x1 conv), can speed up the convolution operation.</p>
<p><a href="http://i.stack.imgur.com/qaSdE.png" rel="noreferrer">this is the figure for their conv layer architecture</a></p>
<p>I found out that I can implement this architecture with tf.nn.depthwise_conv2d and 1x1 convolution, or with tf.nn.separable_conv2d.</p>
<p>below is my implementation:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>#conv filter for depthwise convolution
depthwise_filter = tf.get_variable("depth_conv_w", [3,3,64,1], initializer=tf.random_normal_initializer(stddev=np.sqrt(2.0/9/32)))
#conv filter for linear channel projection
pointwise_filter = tf.get_variable("point_conv_w", [1,1,64,64], initializer=tf.random_normal_initializer(stddev=np.sqrt(2.0/1/64)))
conv_b = tf.get_variable("conv_b", [64], initializer=tf.constant_initializer(0))
#depthwise convolution, with multiplier 1
conv_tensor = tf.nn.relu(tf.nn.depthwise_conv2d(tensor, depthwise_filter, [1,1,1,1], padding='SAME'))
#linear channel projection with 1x1 convolution
conv_tensor = tf.nn.bias_add(tf.nn.conv2d(conv_tensor, pointwise_filter, [1,1,1,1], padding='VALID'), conv_b)
#residual
tensor = tf.add(tensor, conv_tensor)</code></pre>
<p>This should be around 9 times faster than the original 3x3x64 -> 64 channel convolution.</p>
<p>However, I cannot experience any performance improvement.</p>
<p>I must assume that I am doing this wrong, or there's something wrong with tensorflow's implementation.</p>
<p>Since there are few examples using depthwise_conv2d, I am leaving this question here.</p>
<p>Is this slow speed normal? or is there any mistake?</p> | 2016-09-07 11:13:06.563000+00:00 | 2019-10-15 09:20:49.337000+00:00 | 2016-09-07 11:31:33.150000+00:00 | python|tensorflow | ['https://arxiv.org/pdf/1803.09926.pdf'] | 1 |
54,819,367 | <p>Although it compares only 2 algorithms, Myers and Histogram, it might help. <a href="https://arxiv.org/abs/1902.02467" rel="nofollow noreferrer">A study by Nugroho et al.</a> reveals the level of disagreement between the two <code>diff</code> algorithms. The study performed 3 comparisons, namely metrics, the SZZ algorithm, and patches. From the comparison of metrics and SZZ, we can see large differences between Myers and Histogram in the number of identified code changes. It is true that neither <code>diff</code> is incorrect in describing changes. However, from the manual comparison of patches, the Histogram algorithm provides a <code>diff</code> output that is better at describing the human change intention.</p> | 2019-02-22 02:49:29.147000+00:00 | 2019-02-25 05:55:34.003000+00:00 | 2019-02-25 05:55:34.003000+00:00 | null | 19,949,526 | <p>Git offers these 4 diff algorithms, but there is no further information on what their differences are.</p>
<p>What are the advantages of each of these algorithms? Is there some comparison of various cases where the algorithms perform differently?</p> | 2013-11-13 09:04:50.707000+00:00 | 2019-02-25 05:55:34.003000+00:00 | 2013-11-18 10:01:57.167000+00:00 | diff|git-diff | ['https://arxiv.org/abs/1902.02467'] | 1
65,995,141 | <p>For people landing here in the future: the 'new' <a href="https://github.com/agda/agda-categories" rel="noreferrer">agda-categories</a> has some documentation in its readme about the choices, and also there is a <a href="https://arxiv.org/abs/2005.07059" rel="noreferrer">published paper</a> about it that goes into a lot more details regarding the design choices.</p> | 2021-02-01 15:10:42.140000+00:00 | 2021-02-01 15:10:42.140000+00:00 | null | null | 29,837,472 | <p>I'm trying to understand the Categories library, but I'm fairly new to Agda, so I'm looking for some sort of document explaining the choices that were made in the implementation of the library. There's a link to such a thing in the readme, but it's broken. </p> | 2015-04-24 02:03:53.900000+00:00 | 2021-02-01 15:10:42.140000+00:00 | null | agda|category-theory | ['https://github.com/agda/agda-categories', 'https://arxiv.org/abs/2005.07059'] | 2 |
19,990,424 | <p>I've usually seen neural networks used for this sort of recognition task, e.g. <a href="http://www.ccs.neu.edu/home/feneric/charrecnn.html" rel="nofollow noreferrer">here</a>, <a href="https://stackoverflow.com/questions/9684204/training-feedforward-neural-network-for-ocr">here</a>, <a href="http://yann.lecun.com/exdb/publis/pdf/jackel-95.pdf" rel="nofollow noreferrer">here</a>, and <a href="http://arxiv.org/ftp/arxiv/papers/1211/1211.4385.pdf" rel="nofollow noreferrer">here</a>. Since a simple google search turns up so many hits for neural networks in OCR, I'll assume you are set on using HMMs (a project limitation, correct?). Regardless, these links can offer some insight into gridding the image and obtaining image features.</p>
<p>Your approach for turning a grid into a sequence of observations is reasonable. In this case, be sure you do not confuse observations and states. The features you extract from one block should be collected into one observation, i.e. a feature vector. (In comparison to speech recognition, your block's feature vector is analogous to the feature vector associated with a speech phoneme.) You don't really have much information regarding the underlying states. This is the hidden aspect of HMMs, and the training process should inform the model how likely one feature vector is to follow another for a character (i.e. transition probabilities).</p>
<p>Since this is an off-line process, don't be concerned with the temporal aspects of how characters are actually drawn. For the purposes of your task, you've imposed a temporal order on the sequence of observations with your the left-to-right, top-to-bottom block sequence. This should work fine.</p>
<p>As for HMM performance: choose a reasonable vector of salient features. In speech recognition, the dimensionality of a feature vector can be high (>10). (This is also where the cited literature can assist.) Set aside a percentage of the training data so that you can properly test the model. First, train the model, and then evaluate it on the training dataset. How well does it classify your characters? If it does poorly, re-evaluate the feature vector. If it does well on the training data, test the generality of the classifier by running it on the reserved test data.</p>
<p>As for the number of states, I would start with some heuristically derived number. Assuming your character images are scaled and normalized, perhaps something like 40%(?) of the blocks are occupied? This is a crude guess on my part since a source image was not provided. For an 8x8 grid, this would imply that 25 blocks are occupied. We could then start with 25 states - but that's probably naive: empty blocks can convey information (meaning the number of states might increase), but some feature sets may be observed in similar states (meaning the number of states might decrease). If it were me, I would probably pick something like 20 states. Having said that: be careful not to confuse features and states. Your feature vector is a representation of things observed in a particular state. If the tests described above show your model is performing poorly, tweak the number of states up or down and try again.</p>
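<p>If it helps to see the moving parts, here is a rough Python sketch (using hmmlearn rather than JAHMM, purely for illustration) of training one HMM per character class on the block-feature sequences and classifying by likelihood; <code>feats</code> is a hypothetical dict mapping each character label to a list of (n_blocks, n_features) arrays:</p>
<pre><code>import numpy as np
from hmmlearn import hmm

models = {}
for label, sequences in feats.items():
    X = np.vstack(sequences)                 # stack all sequences for this character
    lengths = [len(s) for s in sequences]    # so the HMM knows where each sample starts
    m = hmm.GaussianHMM(n_components=20, covariance_type='diag', n_iter=50)
    m.fit(X, lengths)
    models[label] = m

def classify(sequence):
    # pick the character whose model assigns the highest log-likelihood
    return max(models, key=lambda c: models[c].score(sequence))
</code></pre>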
<p>Good luck.</p> | 2013-11-14 23:15:38.273000+00:00 | 2013-11-14 23:15:38.273000+00:00 | 2017-05-23 11:59:29.003000+00:00 | null | 19,747,606 | <p>I have extracted features from many images of isolated characters (such as gradient, neighbouring pixel weight and geometric properties. How can I use HMMs as a classifier trained on this data? All literature I read about HMM refers to states and state transitions but I can't connect it to features and class labeling. The example on JAHMM's home page doesn't relate to my problem.
I need to use HMM not because it will work better than other approaches for this problem but because of constraints on project topic.</p>
<p>There was an answer to <a href="https://stackoverflow.com/questions/9386603/how-can-hmms-be-used-for-handwriting-recognition">this</a> question for online recognition but I want the same for offline and in a little more detail</p>
<p>EDIT: I partitioned each character into a grid with fixed number of squares. Now I am planning to perform feature extraction on each grid block and thus obtain a sequence of features for each sample by moving from left to right and top to bottom. </p>
<ol>
<li><p>Would this represent an adequate "sequence" for an HMM i.e. would an HMM be able to guess the temporal variation of the data, even though the character is not drawn from left to right and top to bottom? If not suggest an alternate way. </p></li>
<li><p>Should I feed a lot of features or start with a few? how do I know if the HMM is underforming or if the features are bad? I am using JAHMM.</p></li>
<li><p>Extracting stroke features is difficult and can't be logically combined with grid features? (since HMM expects a sequence generated by some random process)</p></li>
</ol> | 2013-11-02 22:27:01.987000+00:00 | 2013-11-14 23:15:38.273000+00:00 | 2017-05-23 12:15:38.033000+00:00 | ocr|classification|hidden-markov-models | ['http://www.ccs.neu.edu/home/feneric/charrecnn.html', 'https://stackoverflow.com/questions/9684204/training-feedforward-neural-network-for-ocr', 'http://yann.lecun.com/exdb/publis/pdf/jackel-95.pdf', 'http://arxiv.org/ftp/arxiv/papers/1211/1211.4385.pdf'] | 4 |
64,063,765 | <p>The following code is based on a <a href="https://stackoverflow.com/questions/30463616/payne-hanek-algorithm-implementation-in-c">previous answer</a> in which I demonstrated how to perform a fairly accurate argument reduction for trigonometric functions by using the Cody-Waite method of split constants for arguments small in magnitude, and the Payne-Hanek method for arguments large in magnitude. For details on the Payne-Hanek algorithm see there, for details on the Cody-Waite algorithm see this <a href="https://stackoverflow.com/a/42541486/780717">previous answer</a> of mine.</p>
<p>Here I have made the adjustments necessary to adapt to the restrictions of the asker's platform, in that no 64-bit types are supported, fused multiply-add is not supported, and helper functions from <code>math.h</code> are not available. I am assuming that <code>float</code> maps to IEEE-754 <code>binary32</code> format, and that there is a way to re-interpret such a 32-bit float as a 32-bit unsigned integer and vice versa. I have implemented this re-interpretation via the standard portable idiom, that is, by using <code>memcpy()</code>, but other methods appropriate for the unspecified target platform may be chosen, such as inline assembly, machine-specific intrinsics, or volatile unions.</p>
<p>Since this code is basically a port of my previous code to a more restrictive environment, it lacks perhaps the elegance of a <em>de novo</em> design specifically targeted at that environment. I have basically replaced the <code>frexp()</code> helper function from <code>math.h</code> with some bit twiddling, emulated 64-bit integer computation with pairs of 32-bit integers, replaced the double-precision computation with 32-bit fixed-point computation (which worked much better than I had anticipated), and replaced all FMAs with the unfused equivalent.</p>
<p>Re-working the Cody-Waite portion of the argument reduction took quite a bit of work. Clearly, without FMA available, we need to ensure a sufficient number of trailing zero bits in the constituent parts of the constant π/2 (except the least significant one) to make sure the products are exact. I spent several hours experimentally puzzling out a particular split that delivers accurate results but also pushes the switchover point to the Payne-Hanek method as high as possible.</p>
<p>When <code>USE_FMA = 1</code> is specified, the output of the test app, when compiled with a high-quality math library, should look similar to this:</p>
<pre><code>Testing sinf ... PASSED. max ulp err = 1.493253 diffsum = 337633490
Testing cosf ... PASSED. max ulp err = 1.495098 diffsum = 342020968
</code></pre>
<p>With <code>USE_FMA = 0</code> the accuracy changes slightly for the worse:</p>
<pre><code>Testing sinf ... PASSED. max ulp err = 1.498012 diffsum = 359702532
Testing cosf ... PASSED. max ulp err = 1.504061 diffsum = 364682650
</code></pre>
<p>The <code>diffsum</code> output is a <em>rough</em> indicator of overall accuracy, here showing that about 90% of all inputs result in a correctly rounded single-precision response.</p>
<p>Note that it is important to compile the code with the strictest floating-point settings and highest degree of adherence to IEEE-754 the compiler offers. For the Intel compiler that I used to develop and test this code, that can be achieved by compiling with <code>/fp:strict</code>. Also, the quality of the math library used for reference is crucial for accurate assessment of the ulp error of this single-precision code. The Intel compiler comes with a math library that provides double-precision elementary math functions with just slightly over 0.5 ulp error in the HA (high accuracy) variant. Use of a multi-precision reference library may be preferable but would have slowed me down too much here.</p>
<pre class="lang-c prettyprint-override"><code>#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h> // for memcpy()
#include <math.h> // for test purposes, and when PORTABLE=1 or USE_FMA=1
#define USE_FMA (0) // use fmaf() calls for arithmetic
#define PORTABLE (0) // allow helper functions from math.h
#define HAVE_U64 (0) // 64-bit integer type available
#define CW_STAGES (3) // number of stages in Cody-Waite reduction when USE_FMA=0
#if USE_FMA
#define SIN_RED_SWITCHOVER (117435.992f)
#define COS_RED_SWITCHOVER (71476.0625f)
#define MAX_DIFF (1)
#else // USE_FMA
#if CW_STAGES == 2
#define SIN_RED_SWITCHOVER (3.921875f)
#define COS_RED_SWITCHOVER (3.921875f)
#elif CW_STAGES == 3
#define SIN_RED_SWITCHOVER (201.15625f)
#define COS_RED_SWITCHOVER (142.90625f)
#endif // CW_STAGES
#define MAX_DIFF (2)
#endif // USE_FMA
/* re-interpret the bit pattern of an IEEE-754 float as a uint32 */
uint32_t float_as_uint32 (float a)
{
uint32_t r;
memcpy (&r, &a, sizeof r);
return r;
}
/* re-interpret the bit pattern of a uint32 as an IEEE-754 float */
float uint32_as_float (uint32_t a)
{
float r;
memcpy (&r, &a, sizeof r);
return r;
}
/* Compute the upper 32 bits of the product of two unsigned 32-bit integers */
#if HAVE_U64
uint32_t umul32_hi (uint32_t a, uint32_t b)
{
return (uint32_t)(((uint64_t)a * b) >> 32);
}
#else // HAVE_U64
/* Henry S. Warren, "Hacker's Delight, 2nd ed.", Addison-Wesley 2012. Fig. 8-2 */
uint32_t umul32_hi (uint32_t a, uint32_t b)
{
uint16_t a_lo = (uint16_t)a;
uint16_t a_hi = a >> 16;
uint16_t b_lo = (uint16_t)b;
uint16_t b_hi = b >> 16;
uint32_t p0 = (uint32_t)a_lo * b_lo;
uint32_t p1 = (uint32_t)a_lo * b_hi;
uint32_t p2 = (uint32_t)a_hi * b_lo;
uint32_t p3 = (uint32_t)a_hi * b_hi;
uint32_t t = (p0 >> 16) + p1;
return (t >> 16) + (((uint32_t)(uint16_t)t + p2) >> 16) + p3;
}
#endif // HAVE_U64
/* 190 bits of 2/PI for Payne-Hanek style argument reduction. */
const uint32_t two_over_pi_f [] =
{
0x28be60db,
0x9391054a,
0x7f09d5f4,
0x7d4d3770,
0x36d8a566,
0x4f10e410
};
/* Reduce a trig function argument using the slow Payne-Hanek method */
float trig_red_slowpath_f (float a, int *quadrant)
{
uint32_t ia, hi, mid, lo, tmp, i, l, h, plo, phi;
int32_t e, q;
float r;
#if PORTABLE
ia = (uint32_t)(fabsf (frexpf (a, &e)) * 4.29496730e+9f); // 0x1.0p32
#else // PORTABLE
ia = ((float_as_uint32 (a) & 0x007fffff) << 8) | 0x80000000;
e = ((float_as_uint32 (a) >> 23) & 0xff) - 126;
#endif // PORTABLE
/* compute product x * 2/pi in 2.62 fixed-point format */
i = (uint32_t)e >> 5;
e = (uint32_t)e & 31;
hi = i ? two_over_pi_f [i-1] : 0;
mid = two_over_pi_f [i+0];
lo = two_over_pi_f [i+1];
tmp = two_over_pi_f [i+2];
if (e) {
hi = (hi << e) | (mid >> (32 - e));
mid = (mid << e) | (lo >> (32 - e));
lo = (lo << e) | (tmp >> (32 - e));
}
/* compute 64-bit product phi:plo */
phi = 0;
l = ia * lo;
h = umul32_hi (ia, lo);
plo = phi + l;
phi = h + (plo < l);
l = ia * mid;
h = umul32_hi (ia, mid);
plo = phi + l;
phi = h + (plo < l);
l = ia * hi;
phi = phi + l;
/* split fixed-point result into integer and fraction portions */
q = phi >> 30; // integral portion = quadrant<1:0>
phi = phi & 0x3fffffff; // fraction
if (phi & 0x20000000) { // fraction >= 0.5
phi = phi - 0x40000000; // fraction - 1.0
q = q + 1;
}
/* compute remainder of x / (pi/2) */
#if USE_FMA
float phif, plof, chif, clof, thif, tlof;
phif = 1.34217728e+8f * (float)(int32_t)(phi & 0xffffffe0); // 0x1.0p27
plof = (float)((plo >> 5) | (phi << (32-5)));
thif = phif + plof;
plof = (phif - thif) + plof;
phif = thif;
chif = 1.08995894e-17f; // 0x1.921fb6p-57 // (1.5707963267948966 * 0x1.0p-57)_hi
clof = -3.03308686e-25f; // -0x1.777a5cp-82 // (1.5707963267948966 * 0x1.0p-57)_lo
thif = phif * chif;
tlof = fmaf (phif, chif, -thif);
tlof = fmaf (phif, clof, tlof);
tlof = fmaf (plof, chif, tlof);
r = thif + tlof;
#else // USE_FMA
/* record sign of fraction */
uint32_t s = phi & 0x80000000;
/* take absolute value of fraction */
if ((int32_t)phi < 0) {
phi = ~phi;
plo = 0 - plo;
phi += (plo == 0);
}
/* normalize fraction */
e = 0;
while ((int32_t)phi > 0) {
phi = (phi << 1) | (plo >> 31);
plo = plo << 1;
e--;
}
/* multiply 32 high-order bits of fraction with pi/2 */
phi = umul32_hi (phi, 0xc90fdaa2); // (uint32_t)rint(PI/2 * 2**31)
/* normalize product */
if ((int32_t)phi > 0) {
phi = phi << 1;
e--;
}
/* round and convert to floating point */
uint32_t ri = s + ((e + 128) << 23) + (phi >> 8) + ((phi & 0xff) > 0x7e);
r = uint32_as_float (ri);
#endif // USE_FMA
if (a < 0.0f) {
r = -r;
q = -q;
}
*quadrant = q;
return r;
}
/* Argument reduction for trigonometric functions that reduces the argument
to the interval [-PI/4, +PI/4] and also returns the quadrant. It returns
-0.0f for an input of -0.0f
*/
float trig_red_f (float a, float switch_over, int *q)
{
float j, r;
if (fabsf (a) > switch_over) {
/* Payne-Hanek style reduction. M. Payne and R. Hanek, "Radian reduction
for trigonometric functions". SIGNUM Newsletter, 18:19-24, 1983
*/
r = trig_red_slowpath_f (a, q);
} else {
/* Cody-Waite style reduction. W. J. Cody and W. Waite, "Software Manual
for the Elementary Functions", Prentice-Hall 1980
*/
#if USE_FMA
j = fmaf (a, 6.36619747e-1f, 1.2582912e+7f); // 0x1.45f306p-1, 0x1.8p+23
j = j - 1.25829120e+7f; // 0x1.8p+23
r = fmaf (j, -1.57079601e+00f, a); // -0x1.921fb0p+00 // pio2_high
r = fmaf (j, -3.13916473e-07f, r); // -0x1.5110b4p-22 // pio2_mid
r = fmaf (j, -5.39030253e-15f, r); // -0x1.846988p-48 // pio2_low
#else // USE_FMA
j = (a * 6.36619747e-1f + 1.2582912e+7f); // 0x1.45f306p-1, 0x1.8p+23
j = j - 1.25829120e+7f; // 0x1.8p+23
#if CW_STAGES == 2
r = a - j * 1.57079625e+00f; // 0x1.921fb4p+0 // pio2_high
r = r - j * 7.54979013e-08f; // 0x1.4442d2p-24 // pio2_low
#elif CW_STAGES == 3
r = a - j * 1.57078552e+00f; // 0x1.921f00p+00 // pio2_high
r = r - j * 1.08043314e-05f; // 0x1.6a8880p-17 // pio2_mid
r = r - j * 2.56334407e-12f; // 0x1.68c234p-39 // pio2_low
#endif // CW_STAGES
#endif // USE_FMA
*q = (int)j;
}
return r;
}
/* Approximate sine on [-PI/4,+PI/4]. Maximum ulp error with USE_FMA = 0.64196
Returns -0.0f for an argument of -0.0f
Polynomial approximation based on T. Myklebust, "Computing accurate
Horner form approximations to special functions in finite precision
arithmetic", http://arxiv.org/abs/1508.03211, retrieved on 8/29/2016
*/
float sinf_poly (float a, float s)
{
float r, t;
#if USE_FMA
r = 2.86567956e-6f; // 0x1.80a000p-19
r = fmaf (r, s, -1.98559923e-4f); // -0x1.a0690cp-13
r = fmaf (r, s, 8.33338592e-3f); // 0x1.111182p-07
r = fmaf (r, s, -1.66666672e-1f); // -0x1.555556p-03
t = fmaf (a, s, 0.0f); // ensure -0 is passed through
r = fmaf (r, t, a);
#else // USE_FMA
r = 2.86567956e-6f; // 0x1.80a000p-19
r = r * s - 1.98559923e-4f; // -0x1.a0690cp-13
r = r * s + 8.33338592e-3f; // 0x1.111182p-07
r = r * s - 1.66666672e-1f; // -0x1.555556p-03
t = a * s + 0.0f; // ensure -0 is passed through
r = r * t + a;
#endif // USE_FMA
return r;
}
/* Approximate cosine on [-PI/4,+PI/4]. Maximum ulp error with USE_FMA = 0.87444 */
float cosf_poly (float s)
{
float r;
#if USE_FMA
r = 2.44677067e-5f; // 0x1.9a8000p-16
r = fmaf (r, s, -1.38877297e-3f); // -0x1.6c0efap-10
r = fmaf (r, s, 4.16666567e-2f); // 0x1.555550p-05
r = fmaf (r, s, -5.00000000e-1f); // -0x1.000000p-01
r = fmaf (r, s, 1.00000000e+0f); // 0x1.000000p+00
#else // USE_FMA
r = 2.44677067e-5f; // 0x1.9a8000p-16
r = r * s - 1.38877297e-3f; // -0x1.6c0efap-10
r = r * s + 4.16666567e-2f; // 0x1.555550p-05
r = r * s - 5.00000000e-1f; // -0x1.000000p-01
r = r * s + 1.00000000e+0f; // 0x1.000000p+00
#endif // USE_FMA
return r;
}
/* Map sine or cosine value based on quadrant */
float sinf_cosf_core (float a, int i)
{
float r, s;
s = a * a;
r = (i & 1) ? cosf_poly (s) : sinf_poly (a, s);
if (i & 2) {
r = 0.0f - r; // don't change "sign" of NaNs
}
return r;
}
/* maximum ulp error with USE_FMA = 1: 1.493253 */
float my_sinf (float a)
{
float r;
int i;
a = a * 0.0f + a; // inf -> NaN
r = trig_red_f (a, SIN_RED_SWITCHOVER, &i);
r = sinf_cosf_core (r, i);
return r;
}
/* maximum ulp error with USE_FMA = 1: 1.495098 */
float my_cosf (float a)
{
float r;
int i;
a = a * 0.0f + a; // inf -> NaN
r = trig_red_f (a, COS_RED_SWITCHOVER, &i);
r = sinf_cosf_core (r, i + 1);
return r;
}
/* re-interpret bit pattern of an IEEE-754 double as a uint64 */
uint64_t double_as_uint64 (double a)
{
uint64_t r;
memcpy (&r, &a, sizeof r);
return r;
}
double floatUlpErr (float res, double ref)
{
uint64_t i, j, err, refi;
int expoRef;
/* ulp error cannot be computed if either operand is NaN, infinity, zero */
if (isnan (res) || isnan (ref) || isinf (res) || isinf (ref) ||
(res == 0.0f) || (ref == 0.0f)) {
return 0.0;
}
/* Convert the float result to an "extended float". This is like a float
with 56 instead of 24 effective mantissa bits.
*/
i = ((uint64_t)float_as_uint32(res)) << 32;
/* Convert the double reference to an "extended float". If the reference is
>= 2^129, we need to clamp to the maximum "extended float". If reference
is < 2^-126, we need to denormalize because of the float types's limited
exponent range.
*/
refi = double_as_uint64(ref);
expoRef = (int)(((refi >> 52) & 0x7ff) - 1023);
if (expoRef >= 129) {
j = 0x7fffffffffffffffULL;
} else if (expoRef < -126) {
j = ((refi << 11) | 0x8000000000000000ULL) >> 8;
j = j >> (-(expoRef + 126));
} else {
j = ((refi << 11) & 0x7fffffffffffffffULL) >> 8;
j = j | ((uint64_t)(expoRef + 127) << 55);
}
j = j | (refi & 0x8000000000000000ULL);
err = (i < j) ? (j - i) : (i - j);
return err / 4294967296.0;
}
int main (void)
{
float arg, res, reff;
uint32_t argi, resi, refi;
int64_t diff, diffsum;
double ref, ulp, maxulp;
printf ("Testing sinf ... ");
diffsum = 0;
maxulp = 0;
argi = 0;
do {
arg = uint32_as_float (argi);
res = my_sinf (arg);
ref = sin ((double)arg);
reff = (float)ref;
resi = float_as_uint32 (res);
refi = float_as_uint32 (reff);
ulp = floatUlpErr (res, ref);
if (ulp > maxulp) {
maxulp = ulp;
}
diff = (resi > refi) ? (resi - refi) : (refi - resi);
if (diff > MAX_DIFF) {
printf ("\nerror @ %08x (% 15.8e): res=%08x (% 15.8e) ref=%08x (%15.8e)\n", argi, arg, resi, res, refi, reff);
return EXIT_FAILURE;
}
diffsum = diffsum + diff;
argi++;
} while (argi);
printf ("PASSED. max ulp err = %.6f diffsum = %lld\n", maxulp, diffsum);
printf ("Testing cosf ... ");
diffsum = 0;
maxulp = 0;
argi = 0;
do {
arg = uint32_as_float (argi);
res = my_cosf (arg);
ref = cos ((double)arg);
reff = (float)ref;
resi = float_as_uint32 (res);
refi = float_as_uint32 (reff);
ulp = floatUlpErr (res, ref);
if (ulp > maxulp) {
maxulp = ulp;
}
diff = (resi > refi) ? (resi - refi) : (refi - resi);
if (diff > MAX_DIFF) {
printf ("\nerror @ %08x (% 15.8e): res=%08x (% 15.8e) ref=%08x (%15.8e)\n", argi, arg, resi, res, refi, reff);
return EXIT_FAILURE;
}
diffsum = diffsum + diff;
argi++;
} while (argi);
printf ("PASSED. max ulp err = %.6f diffsum = %lld\n", maxulp, diffsum);
return EXIT_SUCCESS;
}
</code></pre> | 2020-09-25 12:02:05.247000+00:00 | 2022-03-23 10:03:16.143000+00:00 | 2022-03-23 10:03:16.143000+00:00 | null | 64,058,564 | <p>I have implemented some approximations for trigonometric functions (sin,cos,arctan) computed with single precision (32 bit floating point) in C. They are accurate to about +/- 2 ulp.</p>
<p>My <strong>target device</strong> does not support any <code><cmath></code> or <code><math.h></code> methods. It does not provide an FMA, but a MAC ALU. ALU and LU compute in 32-bit format.</p>
<p>My arctan approximation is actually a modified version of <a href="https://stackoverflow.com/questions/26692859/best-machine-optimized-polynomial-minimax-approximation-to-arctangent-on-1-1">njuffa's approximation</a>, which approximates arctan over the full range. The sine and cosine functions are accurate to within 2 ulp over the range [-pi, pi].</p>
<p>I am now aiming to provide a larger input range (as large as possible, ideally [FLT_MIN,FLT_MAX]) for sine and cosine, which leads me to argument reduction.</p>
<p>I'm currently reading different papers like <a href="https://www.csee.umbc.edu/%7Ephatak/645/supl/Ng-ArgReduction.pdf" rel="nofollow noreferrer">ARGUMENT REDUCTION FOR HUGE ARGUMENTS: Good to the Last Bit</a> by K. C. Ng, or the paper about this <a href="https://core.ac.uk/download/pdf/189657632.pdf" rel="nofollow noreferrer">new argument reduction algorithm</a>, but I wasn't able to derive an implementation from it.</p>
<p>Also I want to mention two Stack Overflow questions that refer to related problems: there is an <a href="https://stackoverflow.com/questions/9423516/range-reduction-poor-precision-for-single-precision-floating-point">approach with Matlab and C++</a> which is based on the first paper I linked. It actually uses Matlab and cmath methods, and it limits the input to [0, 20,000]. The other one was already mentioned in the comments; it is an approach to an implementation of sin and cos in C, using various C libraries which are not available to me. Since both posts are already several years old, there might be some new findings.</p>
<p>It seems like the algorithm mostly used in this case is to store the value of 2/pi accurate to the needed number of bits, to be able to compute the modulo calculation accurately and simultaneously avoid cancellation. My device does not provide a large DMEM, which means large look-up tables with hundreds of bits are not possible. This procedure is actually described on page 70 of <a href="https://www5.in.tum.de/%7Ehuckle/numericalcomputationguide.pdf" rel="nofollow noreferrer">this</a> reference, which by the way provides a lot of useful information about floating-point math.</p>
<p>So my question is: is there another efficient way to reduce the arguments for sine and cosine with single-precision accuracy while avoiding large LUTs? The papers mentioned above actually focus on double precision and use up to 1000 digits, which is not suitable for my use case.</p>
<p>I actually haven't found any implementation in C, nor an implementation aimed at single-precision calculation. I would be grateful for any sort of hints/links/examples...</p>
60,767,841 | <p>Your best bet for these things is a GitHub search. Here is one example of mine for a container used to support a <a href="https://stat430.com" rel="nofollow noreferrer">class I teach</a>, building on <a href="https://github.com/stat430dspm/rocker-pl/blob/master/Dockerfile" rel="nofollow noreferrer">another Dockerfile from our Rocker Project</a>. The link gives you the full details, but, omitting some bits for brevity, here we have</p>
<pre><code>FROM rocker/r-ubuntu:18.04
# Install required libraries -- using prebuild binaries where available
RUN apt-get update && apt-get install -y \
git \
r-cran-data.table \
r-cran-devtools \
r-cran-doparallel \
r-cran-dygraphs \
r-cran-foreach \
r-cran-fs \
r-cran-future.apply \
r-cran-gh \
r-cran-git2r \
r-cran-igraph \
r-cran-memoise \
r-cran-microbenchmark \
r-cran-png \
r-cran-rcpparmadillo \
r-cran-rex \
r-cran-rsqlite \
r-cran-runit \
r-cran-shiny \
r-cran-stringdist \
r-cran-testthat \
r-cran-tidyverse \
r-cran-tinytest \
r-cran-xts \
sqlite3 \
sudo
# Install additional R packages from CRAN (on top of the ones
# pre-built as r-cran-*)
RUN install.r bench diffobj flexdashboard lintr ttdo unix
# Install plr -- for now (?) from GH; also install visualTest
RUN installGithub.r stat430dspm/plr MangoTheCat/visualTest
</code></pre>
<p>That pretty much covers it as we </p>
<ul>
<li>use the PPA by Michael Rutter to get as much as we can for Ubuntu LTS (currently still 18.04) via prebuilt .deb packages for Ubuntu</li>
<li>use the <a href="https://github.com/eddelbuettel/littler" rel="nofollow noreferrer">littler</a> script <code>install.r</code> to install some packages from CRAN</li>
<li>use another <a href="https://github.com/eddelbuettel/littler" rel="nofollow noreferrer">littler</a> script <code>installGithub.r</code> to install two more repos from GitHub </li>
</ul>
<p>That just shows my preferred rank-ordering: binaries over CRAN over GitHub. The key point for your question is that these <a href="https://github.com/eddelbuettel/littler" rel="nofollow noreferrer">littler</a> scripts are also on the R-ver stack for Rocker. But with r-ver you have to be <em>very careful</em> about mixing, as the release date is fixed with MRAN <em>by design</em>. </p>
<p>(And in case anybody wants to know more about the <em>why</em> of <em>this</em> container, we just put a <a href="https://arxiv.org/abs/1912.11144" rel="nofollow noreferrer">pre-print on arXiv</a> -- but this is pretty specific to the testing and grading infrastructure we use.)</p>
<p>R script:</p>
<pre><code>library(rga)
library(bigrquery)
bq_token()
rga.open(instance = "ga", where="~/ga.rga")
demoScheduleAPI <- function(){
search_perf <- ga$getData(XXXX, batch = TRUE, walk = TRUE,
start.date = "2020-01-15",
end.date = "2020-01-16",
metrics = "ga:searchUniques",
dimensions="ga:date,ga:hour,ga:searchKeyword, ga:searchCategory ,ga:dimension6,ga:dimension10")
project <- "bidone-data"
insert_upload_job(project, "GA_Export_Prod_DataSet", "Test_Table123", search_perf)
}
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM rocker/r-ver:3.6.1
RUN mkdir /home/bidone
RUN R -e "install.packages('bigrquery', repos='http://cran.rstudio.com/')"
RUN R -e "install.packages('devtools', repos='http://cloud.r-project.org')"
RUN R -e "devtools::install_github('skardhamar/rga')"
COPY .secrets /home/analysis/.secrets
COPY ga /home/analysis/ga
COPY DockerTest.R /home/analysis/DOckerTest.R
CMD R -e "source('/home/analysis/DockerTest.R')"
</code></pre>
<p>It does install the devtools package; however, when it tries to install the <code>rga</code> package from GitHub, it gives the following error.</p>
<pre><code>> devtools::install_github('skardhamar/rga')
Error in loadNamespace(name) : there is no package called ‘devtools’
Calls: :: ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart
Execution halted
The command '/bin/sh -c R -e "devtools::install_github('skardhamar/rga')"' returned a non-zero code: 1
</code></pre>
<p>How can I fix this issue?</p> | 2020-03-20 02:02:45.550000+00:00 | 2020-04-03 10:44:53.430000+00:00 | 2020-04-03 10:44:53.430000+00:00 | r|docker|github|google-analytics|devtools | ['https://stat430.com', 'https://github.com/stat430dspm/rocker-pl/blob/master/Dockerfile', 'https://github.com/eddelbuettel/littler', 'https://github.com/eddelbuettel/littler', 'https://github.com/eddelbuettel/littler', 'https://arxiv.org/abs/1912.11144'] | 6 |
52,015,422 | <p>AdamOptimizer uses the <a href="https://arxiv.org/pdf/1412.6980v8.pdf" rel="nofollow noreferrer">Adam optimization algorithm</a>, which adapts the effective learning rate per parameter. It is an adaptive method, in contrast to gradient descent, which maintains a single learning rate for all weight updates and whose learning rate does not change during training.</p>
<p>Adam has the advantage over GradientDescent of using a running average (momentum) of the gradients (the first moment) as well as a running average of the squared gradients (the second moment).</p>
<p>There is no single answer as to which one is better to use; it all depends on your problem, network and data. But generally, Adam has proven itself to be a leading choice and is one of the most commonly used optimizers in deep-learning tasks, as it often achieves better results and accuracy metrics.</p>
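<p>As a minimal, hedged sketch (assuming the TensorFlow 1.x API, where both classes live under <code>tf.train</code>), switching between the two optimizers is a one-line change:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf  # assumes TensorFlow 1.x

w = tf.Variable(5.0)
loss = tf.square(w - 3.0)          # toy loss with its minimum at w = 3

# plain gradient descent: one fixed learning rate for every parameter
sgd_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

# Adam: adapts per-parameter step sizes from running averages of the
# gradient (first moment) and the squared gradient (second moment)
adam_step = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(adam_step)        # or sgd_step
    print(sess.run(w))             # close to 3.0
</code></pre>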
60,236,013 | <p>I am not an OpenGL programmer. My interests mainly lie in algebraic projective geometry. </p>
<p>Per my understanding, the geometric meaning of the projection matrix built by the OpenGL function <code>glFrustum()</code>, as explained in the official OpenGL guide (the ninth edition of the book, OpenGL Programming Guide -- The Official Guide to Learning OpenGL), actually corresponds, from an algebraic point of view, to several geometric meanings that are unfortunately different from what the authors intended. </p>
<p>Significant differences between them are:
1. in the official guide, the authors are illustrating a compound linear transform with one of its factors being a <code>central projection</code>, so the final compound perspective projection matrix should be a singular (degenerate) matrix;
2. while still in the official guide, in the appendix, the <code>glFrustum()</code> perspective projection matrix is a nonsingular 4 × 4 square matrix!</p>
<p>Note: the authors are trying to explain a nonsingular matrix in terms of a theoretically singular one!</p>
<p>The following matrix decomposition (not unique) corresponds to one of the geometric meanings of the nonsingular <code>glFrustum()</code> matrix:
<a href="https://i.stack.imgur.com/CUVlA.png" rel="nofollow noreferrer">matrix decomposition (image)</a></p>
<p>LaTeX code of the formulation: </p>
<pre><code>$$n\underbrace{\color{blue}\left[
\begin{array}{cccc}
1 & 0 & 0 & 0 \\[12pt]
0 & 1 & 0 & 0 \\[12pt]
0 & 0 &\dfrac{n+f}{(n-f) n} & 0 \\[12pt]
0 & 0 & 0 & 1 \\[12pt]
\end{array}
\right]}_{\color{red}(1)}\cdot \underbrace{\color{blue}\left[
\begin{array}{cccc}
\dfrac{ -2}{l-r} & 0 & 0 & \dfrac{l+r}{l-r} \\[12pt]
0 & \dfrac{ -2}{b-t} & 0 & \dfrac{b+t}{b-t} \\[12pt]
0 & 0 & 1 & 0 \\[12pt]
0 & 0 & 0 & 1 \\[12pt]
\end{array}
\right]}_{\color{red}(2)}\cdot\underbrace{\color{blue}\left[
\begin{array}{cccc}
1 & 0 & 0 & 0 \\[12pt]
0 & 1 & 0 & 0 \\[12pt]
0 & 0 & 1 & 0 \\[12pt]
0 & 0 & -\dfrac{1}{n} & 1 \\[12pt]
\end{array}
\right]}_{\color{red}(3)}\cdot\underbrace{\color{blue}\left[
\begin{array}{cccc}
1 & 0 & 0 & 0 \\[12pt]
0 & 1 & 0 & 0 \\[12pt]
0 & 0 & 1 & 1 \\[12pt]
0 & 0 & 0 & \dfrac{1}{n} \\[12pt]
\end{array}
\right]}_{\color{red}(4)}\cdot\underbrace{\color{blue}\left[
\begin{array}{cccc}
1 & 0 & 0 & 0 \\[12pt]
0 & 1 & 0 & 0 \\[12pt]
0 & 0 & 1 & 0 \\[12pt]
0 & 0 & 0 & \dfrac{2 n f}{f+n} \\[12pt]
\end{array}
\right]}_{\color{red}(5)} $$
</code></pre>
<p>All of the above matrix factors, except for No. (2), have their geometric meaning clearly redefined in <a href="https://arxiv.org/abs/1307.0998" rel="nofollow noreferrer">Unified Framework of Elementary Geometric Transformations</a>. If you choose this factorization as the geometric explanation of the <code>glFrustum()</code> perspective projection matrix, you will have to make sure any computations or transforms you are doing in your code are consistent with its geometric meaning.</p>
<p>So when you are programming with OpenGL's <code>glFrustum()</code>, you may have to compare what has been illustrated in the official guide with what the <code>glFrustum()</code> perspective projection matrix really means from the viewpoint of pure algebraic projective geometry, and use whichever interpretation suits you.</p>
<pre><code>glFrustum(-near*centerImageX/(GLfloat)fx, near*(imageWidth-centerImageX)/(GLfloat)fx, near*(centerImageY-imageHeight)/(GLfloat)fy, near*centerImageY/(GLfloat)fy, near, far);
</code></pre>
<p>This is ok, I get the good perspective and my 3D object is well inserted on my photo. Now I would like to be able to zoom in/out. Normally, it's done by changing fov in gluPerspective but I don't use gluPerspective because I can't get a good insertion.</p>
<p>With <a href="http://dmi.uib.es/~josemaria/files/OpenGLFAQ/transformations.htm" rel="nofollow">http://dmi.uib.es/~josemaria/files/OpenGLFAQ/transformations.htm</a>, question "9.085 How can I make a call to glFrustum() that matches my call to gluPerspective()?", I tried with :</p>
<pre><code>fov=360.0*atan(imageHeight/(2*fy))/Pi; //computed with parameters top and bottom of my glFrustum
aspect=-centerImageY*fx/(fy*(centerImageX-imageWidth)); //computed with parameters left and bottom of my glFrustum
</code></pre>
<p>void glFrustum(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top, GLdouble near, GLdouble far);</p>
<p>void gluPerspective( GLdouble fovy, GLdouble aspect,GLdouble near, GLdouble far );</p>
<p>And then :</p>
<pre><code>gluPerspective(fov, aspect, near, far);
</code></pre>
<p>But my object is distorted, it's not scaled correctly on all axes, ratio is not kept.</p>
<p>So what do I need to do / modify in my glFrustum parameters to get a zoom in / out effect ?</p> | 2013-12-18 10:48:34.417000+00:00 | 2020-02-15 05:09:32.193000+00:00 | 2013-12-18 10:54:53.020000+00:00 | opengl|glut|augmented-reality|perspectivecamera|frustum | ['https://i.stack.imgur.com/CUVlA.png', 'https://arxiv.org/abs/1307.0998'] | 2 |
13,857,219 | <p>How about the pathological case of the two-moons dataset? Unsupervised k-means will fail badly on it. A high-quality method I am aware of employs a more probabilistic approach using mutual information and combinatorial optimization. Basically, you cast the clustering problem as the problem of finding the optimal [cluster] subset of the full point set for the case of two clusters.</p>
<p>You can find the <a href="http://arxiv.org/pdf/1111.6453v1.pdf" rel="nofollow">relevant paper here</a> (page 42) and the corresponding <a href="http://www.di.ens.fr/~fbach/submodular/" rel="nofollow">Matlab code here</a> to play with (check out the two-moons case). If you are interested in a high-performance C++ implementation of that, with a speed-up of >30x, you can find it here: <a href="http://hpsfo.bitbucket.org" rel="nofollow">HPSFO</a>.</p>
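<p>To make that failure mode concrete, here is a small sketch (assuming scikit-learn is available; this is <em>not</em> the mutual-information method from the paper above) contrasting k-means with a graph-based clusterer on two moons:</p>
<pre class="lang-py prettyprint-override"><code># Illustrative sketch only: plain k-means vs. a graph-based method on two moons.
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.metrics import adjusted_rand_score

X, y = make_moons(n_samples=500, noise=0.05, random_state=0)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        random_state=0).fit_predict(X)

print("k-means ARI: ", adjusted_rand_score(y, km))   # typically well below 1.0
print("spectral ARI:", adjusted_rand_score(y, sc))   # typically close to 1.0
</code></pre>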
<p>For all these algorithms, I see that Elkan's algorithm could provide a boost in term of speed. But what I want to know, is the quality from all these k-means algorithms. Each time, we run these algorithms, the result would be different, due to their heuristic and probabilistic nature. Now, my question is, when it comes to clustering algorithm like k-means, if we want to have a better quality result (as in lesser distortion, etc.) between all these k-means algorithms, which algorithm would be able to give you the better quality? Is it possible to measure such thing?</p> | 2012-12-13 06:47:09.240000+00:00 | 2012-12-14 16:55:18.700000+00:00 | 2012-12-13 08:42:21.333000+00:00 | algorithm|machine-learning|k-means | ['http://arxiv.org/pdf/1111.6453v1.pdf', 'http://www.di.ens.fr/~fbach/submodular/', 'http://hpsfo.bitbucket.org'] | 3 |
59,679,083 | <p>Recently, there were also two papers using bi-/multilingual word/contextual embeddings to do the word alignment. Both of them construct a bipartite graph where the words are weighted with their embedding distances and use graph algorithms to get the alignment.</p>
<p><a href="https://openreview.net/pdf?id=SFIxZr3RRpr" rel="nofollow noreferrer">One paper</a> does a maximum matching between the graph parts. Because the matching is not symmetrical, they do it from both sides and use similar symmetrization heuristics as FastAlign.</p>
<p><a href="https://arxiv.org/abs/1911.03310" rel="nofollow noreferrer">The other one</a> mentions the alignment only briefly uses minimum-weighted edge cover on the graph and uses it as the alignment.</p>
<p>Both of them claim to be better than FastAlign.</p> | 2020-01-10 09:43:53.057000+00:00 | 2020-01-10 09:43:53.057000+00:00 | null | null | 59,615,759 | <p>I am working on a project for building a high precision word alignment between sentences and their translations in other languages, for measuring translation quality. I am aware of Giza++ and other word alignment tools that are used as part of the pipeline for Statistical Machine Translation, but this is not what I'm looking for. I'm looking for an algorithm that can map words from the source sentence into the corresponding words in the target sentence, transparently and accurately given these restrictions:</p>
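<p>As an illustrative sketch of that graph-based idea (my own toy example, not the exact method of either paper): given a source-by-target similarity matrix, for instance from multilingual embeddings, a one-to-one alignment can be read off with a standard assignment solver, assuming SciPy is available:</p>
<pre class="lang-py prettyprint-override"><code># Toy sketch of bipartite matching on a word-similarity matrix (values made up).
import numpy as np
from scipy.optimize import linear_sum_assignment

src = ["I", "know", "this"]
trg = ["Ich", "kenne", "das"]

# similarity[i, j] = similarity between src[i] and trg[j]
similarity = np.array([[0.9, 0.1, 0.2],
                       [0.2, 0.8, 0.1],
                       [0.1, 0.2, 0.7]])

# maximizing total similarity == minimizing its negation
rows, cols = linear_sum_assignment(-similarity)
for i, j in zip(rows, cols):
    print(src[i], "->", trg[j], similarity[i, j])
</code></pre>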
<ul>
<li>the two languages do not have the same word order, and the order keeps changing</li>
<li>some words in the source sentence do not have corresponding words in the target sentence, and vice versa</li>
<li>sometimes a word in the source correspond to multiple words in the target, and vice versa, and there can be many-to-many mapping</li>
<li>there can be sentences where the same word is used multiple times in the sentence, so the alignment needs to be done with the words and their indexes, not only words</li>
</ul>
<p>Here is what I did:</p>
<ul>
<li>Start with a list of sentence pairs, say English-German, with each sentence tokenized to words</li>
<li>Index all words in each sentence, and create an inverted index for each word (e.g. the word "world" occurred in sentences # 5, 16, 19, 26 ... etc), for both source and target words</li>
<li>Now this inverted index can predict the correlation between any source word and any target word, as the intersection between the two words divided by their union. For example, if the tagret word "Welt" occurs in sentences 5, 16, 26,32, The correlation between (world, Welt) is the number of indexes in the intersection (3) divided by the number of indexes in the union (5), and hence the correlation is 0.6. Using the union gives lower correlation with high frequency words, such as "the", and the corresponding words in other languages</li>
<li>Iterate over all sentence pairs again, and use the indexes for the source and target words for a given sentence pairs to create a correlation matrix </li>
</ul>
<p>Here is an example of a correlation matrix between an English and a German sentence. We can see the challenges discussed above. </p>
<p><a href="https://i.stack.imgur.com/gGM2f.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gGM2f.png" alt="An example of the alignment between an English and German sentence, showing the correlations between words, and the green cells are the correct alignment points that should be identified by the word-alignment algorithm"></a></p>
<p>In the image, there is an example of the alignment between an English and German sentence, showing the correlations between words, and the green cells are the correct alignment points that should be identified by the word-alignment algorithm.</p>
<p>Here is some of what I tried:</p>
<ul>
<li>It is possible in some cases that the intended alignment is simply the word pair with the highest correlation in its respective column and row, but in many cases it's not. </li>
<li>I have tried things like Dijkstra's algorithm to draw a path connecting the alignment points, but it doesn't seem to work this way, because it seems you can jump back and forth to earlier words in the sentence because of the word order, and there is no sensible way to skip words for which there is no alignment. </li>
<li>I think the optimum solution will involve something
like expanding rectangles which start from the most likely
correspondences, and span many-to-many correspondences, and skip
words with no alignment, but I'm not exactly sure what would be a
good way to implement this</li>
</ul>
<p>Here is the code I am using:</p>
<pre><code>import random
src_words=["I","know","this"]
trg_words=["Ich","kenne","das"]
def match_indexes(word1,word2):
return random.random() #adjust this to get the actual correlation value
all_pairs_vals=[] #list for all the source (src) and taget (trg) indexes and the corresponding correlation values
for i in range(len(src_words)): #iterate over src indexes
    src_word=src_words[i] #identify the corresponding src word
    for j in range(len(trg_words)): #iterate over trg indexes
        trg_word=trg_words[j] #identify the corresponding trg word
        val=match_indexes(src_word,trg_word) #get the matching value from the inverted indexes of each word (or from the data provided in the spreadsheet)
all_pairs_vals.append((i,j,val)) #add the sentence indexes for scr and trg, and the corresponding val
all_pairs_vals.sort(key=lambda x:-x[-1]) #sort the list in descending order, to get the pairs with the highest correlation first
selected_alignments=[]
used_i,used_j=[],[] #exclude the used rows and column indexes
for i0,j0,val0 in all_pairs_vals:
if i0 in used_i: continue #if the current column index i0 has been used before, exclude current pair-value
if j0 in used_j: continue #same if the current row was used before
selected_alignments.append((i0,j0)) #otherwise, add the current pair to the final alignment point selection
used_i.append(i0) #and include it in the used row and column indexes so that it will not be used again
used_j.append(j0)
for a in all_pairs_vals: #list all pairs and indicate which ones were selected
i0,j0,val0=a
if (i0,j0) in selected_alignments: print(a, "<<<<")
else: print(a)
</code></pre>
<p>It's problematic because it doesn't accommodate many-to-many, or even one-to-many, alignments, and can easily err at the beginning by selecting a wrong pair with the highest correlation, excluding its row and column from future selection. A good algorithm would factor in that a certain pair has the highest correlation in its respective row/column, but would also consider the proximity to other pairs with high correlations.</p>
<p>Here is some data to try if you like, it's in Google sheets:
<a href="https://docs.google.com/spreadsheets/d/1-eO47RH6SLwtYxnYygow1mvbqwMWVqSoAhW64aZrubo/edit?usp=sharing" rel="noreferrer">https://docs.google.com/spreadsheets/d/1-eO47RH6SLwtYxnYygow1mvbqwMWVqSoAhW64aZrubo/edit?usp=sharing</a></p> | 2020-01-06 16:36:18.150000+00:00 | 2021-09-25 10:07:39.687000+00:00 | 2020-01-06 20:27:15.713000+00:00 | python|machine-translation | ['https://openreview.net/pdf?id=SFIxZr3RRpr', 'https://arxiv.org/abs/1911.03310'] | 2 |
38,216,653 | <p>If you are interested in online learning with concept drift, here is some previous work:</p>
<ol>
<li><p>Learning under Concept Drift: an Overview
<a href="https://arxiv.org/pdf/1010.4784.pdf" rel="nofollow">https://arxiv.org/pdf/1010.4784.pdf</a></p></li>
<li><p>The problem of concept drift: definitions and related work
<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.58.9085&rep=rep1&type=pdf" rel="nofollow">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.58.9085&rep=rep1&type=pdf</a></p></li>
<li><p>A Survey on Concept Drift Adaptation
<a href="http://www.win.tue.nl/~mpechen/publications/pubs/Gama_ACMCS_AdaptationCD_accepted.pdf" rel="nofollow">http://www.win.tue.nl/~mpechen/publications/pubs/Gama_ACMCS_AdaptationCD_accepted.pdf</a></p></li>
<li><p>MOA Concept Drift Active Learning Strategies for Streaming Data
<a href="http://videolectures.net/wapa2011_bifet_moa/" rel="nofollow">http://videolectures.net/wapa2011_bifet_moa/</a></p></li>
<li><p>A Stream of Algorithms for Concept Drift
<a href="http://people.cs.georgetown.edu/~maloof/pubs/maloof.heilbronn12.handout.pdf" rel="nofollow">http://people.cs.georgetown.edu/~maloof/pubs/maloof.heilbronn12.handout.pdf</a></p></li>
<li><p>MINING DATA STREAMS WITH CONCEPT DRIFT
<a href="http://www.cs.put.poznan.pl/dbrzezinski/publications/ConceptDrift.pdf" rel="nofollow">http://www.cs.put.poznan.pl/dbrzezinski/publications/ConceptDrift.pdf</a></p></li>
<li><p>Analyzing time series data with stream processing and machine learning
<a href="http://www.ibmbigdatahub.com/blog/analyzing-time-series-data-stream-processing-and-machine-learning" rel="nofollow">http://www.ibmbigdatahub.com/blog/analyzing-time-series-data-stream-processing-and-machine-learning</a></p></li>
</ol> | 2016-07-06 05:07:24.250000+00:00 | 2016-07-06 05:07:24.250000+00:00 | null | null | 23,056,460 | <p>I am currently in the process of designing a recommender system for text articles (a binary case of 'interesting' or 'not interesting'). One of my specifications is that it should continuously update to changing trends. </p>
<p>From what I can tell, the best way to do this is to make use of a machine learning algorithm that supports incremental/<a href="http://en.wikipedia.org/wiki/Online%5fmachine%5flearning">online learning</a>. </p>
<p>Algorithms like the Perceptron and Winnow support online learning but I am not completely certain about Support Vector Machines. Does the scikit-learn python library support online learning and if so, is a support vector machine one of the algorithms that can make use of it?</p>
<p>I am obviously not completely tied down to using support vector machines, but they are usually the go to algorithm for binary classification due to their all round performance. I would be willing to change to whatever fits best in the end.</p> | 2014-04-14 09:24:16.657000+00:00 | 2017-06-03 02:46:45.770000+00:00 | 2014-04-14 09:30:30.737000+00:00 | python|machine-learning|scikit-learn|svm | ['https://arxiv.org/pdf/1010.4784.pdf', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.58.9085&rep=rep1&type=pdf', 'http://www.win.tue.nl/~mpechen/publications/pubs/Gama_ACMCS_AdaptationCD_accepted.pdf', 'http://videolectures.net/wapa2011_bifet_moa/', 'http://people.cs.georgetown.edu/~maloof/pubs/maloof.heilbronn12.handout.pdf', 'http://www.cs.put.poznan.pl/dbrzezinski/publications/ConceptDrift.pdf', 'http://www.ibmbigdatahub.com/blog/analyzing-time-series-data-stream-processing-and-machine-learning'] | 7 |
61,358,140 | <p>NYTimes has changed it's internal html structure since 2014. Newspaper3K will work fine if you try to parse articles published before 2014.</p>
<p>Other things to take into account: </p>
<ul>
<li>1980 articles are not available. </li>
<li>Articles before 1970 are not digitized (except 1964).</li>
<li>1970-1979 articles have lots of words splitted in the middle by a space.</li>
<li>If you parse with Newspaper3k several articles will contain only "NYTimes.com no longer supports Internet Explorer 9 or earlier. Please upgrade your browser."</li>
<li>Lot of articles will have the following texts inserted in the middle:</li>
</ul>
<p>"\n\nNewsletter Sign Up Continue reading the main story Sign Up for the Opinion Today Newsletter Every weekday, get thought-provoking commentary from Op-Ed columnists, the Times editorial board and contributing writers from around the world. Please verify you're not a robot by clicking the box. Invalid email address. Please re-enter. You must select a newsletter to subscribe to. Sign Up You will receive emails containing news content , updates and promotions from The New York Times. You may opt-out at any time. You agree to receive occasional updates and special offers for The New York Times's products and services. Thank you for subscribing. An error has occurred. Please try again later. View all New York Times newsletters.\n\n"</p>
<p>"\n\nNewsletter Sign Up Continue reading the main story Please verify you're not a robot by clicking the box. Invalid email address. Please re-enter. You must select a newsletter to subscribe to. Sign Up You will receive emails containing news content , updates and promotions from The New York Times. You may opt-out at any time. You agree to receive occasional updates and special offers for The New York Times's products and services. Thank you for subscribing. An error has occurred. Please try again later. View all New York Times newsletters.\n"</p>
<ul>
<li>Most blogs (blogs appear in 2010) will have also undesired texts inserted.</li>
</ul>
<p>If you are ok with data from 1990 to 2016 check the dataset used in this paper: <a href="https://arxiv.org/abs/1703.00607" rel="nofollow noreferrer">https://arxiv.org/abs/1703.00607</a> it's available online.</p>
<p>In case you need newer articles I thing you should write your own parser. I'm working on it but I didn't finished yet.</p> | 2020-04-22 05:51:22.167000+00:00 | 2020-04-22 05:51:22.167000+00:00 | null | null | 54,916,077 | <p>I am trying to parse from a set of links generated by using the python library called <a href="https://github.com/codelucas/newspaper" rel="nofollow noreferrer">Newspaper</a></p>
<h1>Goal:</h1>
<p>To parse every link from the main page (or specific page such as category) of a news site.</p>
<h1>Problem:</h1>
<ol>
<li>I generate an AttributeError when attempting to pass an 'article_link' into the 'Article()' method.</li>
<li>Using separate code to parse a single link from 'The New York Times', the text printed does not print the whole article.</li>
</ol>
<h1>Code Producing Problem 1:</h1>
<pre><code>import newspaper
from newspaper import Article
nyt_paper = newspaper.build(
'http://nytimes.com/section/todayspaper', memoize_articles=False)
print(nyt_paper.size())
processed_link_list = []
for article_link in nyt_paper.articles:
article = Article(url=article_link)
article.download()
article.html
article.parse()
print(article.authors)
processed_link_list.append(article_link)
if len(nyt_paper.size()) is len(processed_link_list):
print('All Links Processed')
else:
print('All Links **NOT** Processed')
</code></pre>
<h1>Error Output:</h1>
<pre><code>Traceback (most recent call last):
File "nyt_today.py", line 31, in <module>
article = Article(url=article_link)
File "C:\...\lib\site-packages\newspaper\article.py", line 60, in __init__
scheme = urls.get_scheme(url)
File "C:\...\lib\site-packages\newspaper\urls.py", line 279, in get_scheme
return urlparse(abs_url, **kwargs).scheme
File "C:\...\lib\urllib\parse.py", line 367, in urlparse
url, scheme, _coerce_result = _coerce_args(url, scheme)
File "C:\...\lib\urllib\parse.py", line 123, in _coerce_args
return _decode_args(args) + (_encode_result,)
File "C:\...\lib\urllib\parse.py", line 107, in _decode_args
return tuple(x.decode(encoding, errors) if x else '' for x in args)
File "C:\...\lib\urllib\parse.py", line 107, in <genexpr>
return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'Article' object has no attribute 'decode'
</code></pre>
<h1>Code Producing Problem 2:</h1>
<pre><code>from newspaper import Article
from newspaper import fulltext
import requests
nyt_url = 'https://www.nytimes.com/2019/02/26/opinion/trump-kim-vietnam.html'
article = Article(nyt_url)
article.download()
print(article.html)
article.parse()
print(article.authors)
print(article.text)
</code></pre>
<p>I have also tried this 'fulltext' method exampled in the documentation to print the text:</p>
<pre><code>article_html = requests.get(nyt_url).text
full_text = fulltext(article_html)
print(full_text)
</code></pre>
<p>However, although the <em>Entire</em> article text is ouput to the</p>
<pre><code>print(article.html)
</code></pre>
<p>the </p>
<pre><code>print(article.text)
</code></pre>
<p>does not print it all. The original link, HTML Output and Printed Text Output can be seen below:</p>
<p>Link: <a href="https://www.nytimes.com/2019/02/26/opinion/trump-kim-vietnam.html" rel="nofollow noreferrer">https://www.nytimes.com/2019/02/26/opinion/trump-kim-vietnam.html</a> </p>
<p>Html Output: <a href="https://pastebin.com/A5tFdKap" rel="nofollow noreferrer">see this pastebin for truncated output</a></p>
<p>Printed text: <a href="https://pastebin.com/sHiyb3XU" rel="nofollow noreferrer">see this printed text does not print the entire article</a></p>
<p>Any help would be much appreciated.</p> | 2019-02-27 23:25:29.770000+00:00 | 2020-04-22 05:51:22.167000+00:00 | null | python-3.x|web-scraping|python-newspaper | ['https://arxiv.org/abs/1703.00607'] | 1 |
73,354,344 | <p>Multi-level logic synthesis has been a research topic for decades. Applying a sequence of local transforms for circuit improvement is a strategy pursued by the <a href="http://www.eecs.berkeley.edu/%7Ealanmi/abc/" rel="nofollow noreferrer">Berkeley ABC system</a>.</p>
<p>A related paper is <a href="https://people.eecs.berkeley.edu/%7Ealanmi/publications/2007/tech07_imfs.pdf" rel="nofollow noreferrer">SAT-Based Logic Optimization and Resynthesis</a>.
A recent publication is <a href="https://arxiv.org/pdf/2105.01755" rel="nofollow noreferrer">Reinforcement Learning for Scalable Logic
Optimization with Graph Neural Networks</a>.</p>
<p>Usually, the local transforms start out with a correct circuit and try to improve it without touching its correctness. To transform some arbitrary circuit into a correct one does not look promising to me.</p>
<p>The truth-table of a circuit with <code>n</code> inputs has <code>2^n</code> rows. To check the correctness, the optimizer has to check all <code>2^n</code> values. The number of matches (= the fitness measure) is between <code>0</code> and <code>2^n</code>. There are typically many possible ways to transform the circuits. Therefore, the tree of alternatives quickly explodes for more than a handful inputs.</p>
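<p>As a small illustration of that fitness measure (a sketch, not necessarily the asker's netlist representation), one can evaluate a NOR netlist on all <code>2^n</code> input rows and count the matches against the target truth table:</p>
<pre class="lang-py prettyprint-override"><code># Sketch: exhaustive truth-table fitness of a NOR netlist (illustrative only).
from itertools import product

def nor(a, b):
    return 1 - (a | b)

def evaluate(netlist, inputs):
    # netlist: list of (i, j) pairs; indices refer to earlier signals, where
    # the first len(inputs) signals are the primary inputs
    signals = list(inputs)
    for i, j in netlist:
        signals.append(nor(signals[i], signals[j]))
    return signals[-1]          # output = last gate

def fitness(netlist, target, n):
    # number of the 2**n input rows on which the netlist matches the target
    return sum(evaluate(netlist, bits) == target(*bits)
               for bits in product((0, 1), repeat=n))

# AND(a, b) built from NOR gates: NOR(NOR(a, a), NOR(b, b))
and_netlist = [(0, 0), (1, 1), (2, 3)]
print(fitness(and_netlist, lambda a, b: a & b, n=2))   # 4 == perfect score
</code></pre>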
<p>A possible search approach would be to decompose the function <code>F</code> to be implemented into two simpler functions <code>F1</code> and <code>F2</code> such that</p>
<pre><code>F(a, b, ....) = NOR(F1(a, b, ....), F2(a, b, ....))
</code></pre>
<p>This split can be optimized to minimize the complexity of the sub-functions <code>F1</code> and <code>F2</code>.</p>
<p>The approach is recursive: <code>F1</code> and <code>F2</code> are in turn decomposed into sub-functions. The recursion ends if a function just represents a constant or a single input variable.</p>
<p>The resulting circuit is a tree of two-input <code>NOR</code> gates. It might be possible to re-use already synthesized sub-functions or to allow <code>NOR</code> gates with a varying number of inputs (<code>INV</code>, <code>NOR2</code>, <code>NOR3</code>, ...).</p>
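<p>To make the split idea concrete for the smallest case (two inputs, truth tables encoded as 4-bit masks), one can brute-force all candidate pairs <code>(F1, F2)</code> and keep the cheapest valid split. This is a toy sketch with a made-up cost measure, not a scalable synthesis procedure:</p>
<pre class="lang-py prettyprint-override"><code># Toy sketch: enumerate all NOR splits of a 2-input function (truth table = 4-bit mask).
N_ROWS = 4                     # 2 inputs -> 4 truth-table rows
FULL = (1 << N_ROWS) - 1

def nor_tt(f1, f2):
    # bitwise NOR of two truth tables
    return ~(f1 | f2) & FULL

def cost(f):
    # made-up complexity proxy: number of ON-set rows
    return bin(f).count("1")

def best_split(f):
    # all (f1, f2) with NOR(f1, f2) == f, cheapest by the toy cost
    candidates = [(f1, f2) for f1 in range(16) for f2 in range(16)
                  if nor_tt(f1, f2) == f]
    return min(candidates, key=lambda p: cost(p[0]) + cost(p[1]))

xor = 0b0110                   # XOR(a, b) over rows (a, b) = 00, 01, 10, 11
print(best_split(xor))
</code></pre>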
<p>currently I'm using some kind of simulated annealing(make N random mutations to logic circuit graph and calculate fitness of new solution if fitness high enough than accept as current solution)</p>
<p>is there more efficient algorithms for this kind of problem?</p>
<p><a href="https://i.stack.imgur.com/3qkBa.png" rel="nofollow noreferrer">Here is an example of generated AND gate with 4 inputs</a>(0-1 on edge means, the edge goes from 0 output go to the 1th(right) input of another gate)</p>
<p><a href="https://i.stack.imgur.com/s8I0u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s8I0u.png" alt="enter image description here" /></a></p> | 2022-08-14 17:30:11.727000+00:00 | 2022-08-19 21:43:59.530000+00:00 | 2022-08-14 18:20:14.977000+00:00 | machine-learning|optimization|logic|circuit | ['http://www.eecs.berkeley.edu/%7Ealanmi/abc/', 'https://people.eecs.berkeley.edu/%7Ealanmi/publications/2007/tech07_imfs.pdf', 'https://arxiv.org/pdf/2105.01755'] | 3 |
34,163,681 | <p>"how many images per class should be provided at a minimum?"</p>
<p>Depends how you train.</p>
<p>If training a new model from scratch, purely supervised: For a rule of thumb on the number of images, you can look at the MNIST and CIFAR tasks. These seem to work OK with about 5,000 images per class. That's if you're training from scratch.</p>
<p>You can probably bootstrap your network by beginning with a model trained on ImageNet. This model will already have good features, so it should be able to learn to classify new categories without as many labeled examples. I don't think this is well-studied enough to tell you a specific number.</p>
<p>If training with unlabeled data as well (semi-supervised), maybe only 100 labeled images per class are needed. There is a lot of recent research work on this topic, though it does not yet scale to tasks as large as ImageNet.
Simple to implement:</p>
<pre><code>http://arxiv.org/abs/1507.00677
</code></pre>
<p>Complicated to implement:</p>
<pre><code>http://arxiv.org/abs/1507.02672
http://arxiv.org/abs/1511.06390
http://arxiv.org/abs/1511.06440
</code></pre>
<p>"do we need to appx. provide the same amount of training images per class or can the amount per class be disparate?"</p>
<p>It should work with different numbers of examples per class.</p>
<p>"what is the impact of wrong image data in the training data? E.g. 500 images of a tennis shoe and 50 of other shoes."</p>
<p>You should use the label smoothing technique described in this paper:</p>
<pre><code>http://arxiv.org/abs/1512.00567
</code></pre>
<p>Smooth the labels based on your estimate of the label error rate.</p>
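<p>A minimal sketch of that smoothing (assuming one-hot targets and using your estimated label error rate as <code>eps</code>), in the spirit of the paper linked above:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def smooth_labels(one_hot, eps):
    # blend the one-hot targets with a uniform distribution over the classes;
    # eps is your estimate of the label error rate
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

labels = np.eye(3)[[0, 2, 1]]            # three one-hot examples, 3 classes
print(smooth_labels(labels, eps=0.1))    # 1.0 -> ~0.933, 0.0 -> ~0.033
</code></pre>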
<p>"is it possible to train a classifier with much more classes than the recently published inception-v3 model? Let's say: 30.000."</p>
<p>Yes</p> | 2015-12-08 18:49:22.673000+00:00 | 2015-12-08 18:49:22.673000+00:00 | null | null | 34,162,549 | <p>We are planning to build image classifiers using Google Tensorflow.</p>
<p>I wonder what are the minimum and what are the optimum requirements to train a custom image classifier using a convolutional deep neural network?</p>
<p>The questions are specifically:</p>
<ul>
<li>how many images per class should be provided at a minimum?</li>
<li>do we need to appx. provide the same amount of training images per class or can the amount per class be disparate?</li>
<li>what is the impact of wrong image data in the training data? E.g. 500 images of a tennis shoe and 50 of other shoes. </li>
<li>is it possible to train a classifier with much more classes than the recently published inception-v3 model? Let's say: 30.000.</li>
</ul> | 2015-12-08 17:45:35.320000+00:00 | 2015-12-08 18:49:22.673000+00:00 | null | machine-learning|computer-vision|neural-network|classification|tensorflow | [] | 0 |
62,163,789 | <p>One possible approach is to treat this as a verification problem instead of multi-classification. That is, train a binary classifier for each person. You can also consult this paper: <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">https://arxiv.org/abs/1503.03832</a></p> | 2020-06-03 01:22:32.617000+00:00 | 2020-06-03 01:22:32.617000+00:00 | null | null | 59,303,313 | <p>Let us say I have a training dataset of 10 million images containing images of 100,000 different people. I want to create an ML model that can identify which person is in a given image.
What would be the best approach considering the huge number of people(classes) ?</p> | 2019-12-12 11:14:07.833000+00:00 | 2022-04-03 05:47:11.173000+00:00 | null | machine-learning|neural-network | ['https://arxiv.org/abs/1503.03832'] | 1 |
49,626,777 | <p><a href="https://arxiv.org/abs/1506.06726" rel="nofollow noreferrer">Skip-thought vectors</a> are a system for predicting sentences from a context, by essentially constructing sentence-wide vectors. Might be useful, especially so in combination with context2vec if you want to build a custom model.</p> | 2018-04-03 09:36:06.513000+00:00 | 2018-04-03 09:36:06.513000+00:00 | null | null | 49,624,736 | <p>I am a sort of newbie to NLP world.
But anyway, I have just started my NLP project.</p>
<p>My task is about inferring hidden sentence in a paragraph.
Let me show you an example question.</p>
<p><a href="https://i.stack.imgur.com/yOto1.png" rel="nofollow noreferrer">a multiple choice question about inferring a clause in the blank</a>
I want my machine learning model to extract some meaningful phrase from the given text (in the above image, a paragraph).</p>
<p>I know that my question sounds quite ambiguous to you all. I would appreciate even a small clue.</p>
<p>Thank you for your response in advance.</p> | 2018-04-03 07:40:08.003000+00:00 | 2018-04-03 13:10:07.107000+00:00 | 2018-04-03 13:10:07.107000+00:00 | machine-learning|nlp|deep-learning|word2vec | ['https://arxiv.org/abs/1506.06726'] | 1 |
70,783,992 | <p>This is a very good question, and shows a common misconception about Transformers, stemming from an (unfortunate) formulation in the <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">original Transformers paper</a>. In particular, the authors write the following in Section 3.2.2:</p>
<blockquote>
<p>In this work, we employ <code>h = 8</code> parallel attention layers, or heads. For each of these <strong>we use</strong> <code>d_k = d_v = d_(model) / h = 64</code>. [...]</p>
</blockquote>
<p>Note that the relation <code>d_k = d_v = d_(model) / h</code> is not strictly necessary; it is only important that you do match the final hidden representation (<code>d_(model)</code>) <em>after the Feed-Forward portion of each layer</em>. Specifically for <code>mt5-small</code>, the authors actually use an internal dimension of <code>384</code>, which is simply the product of the parameters <code>d_kv * num_heads = 64 * 6</code>.</p>
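<p>A quick way to see these numbers for yourself (assuming the Hugging Face <code>transformers</code> library is installed) is to read them straight off the model config:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("google/mt5-small")
print(cfg.d_model)                 # 512
print(cfg.num_heads, cfg.d_kv)     # 6, 64
print(cfg.num_heads * cfg.d_kv)    # 384 -- the internal attention width
</code></pre>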
<p>Now, the problem is that many libraries assume this enforced relation between <code>d_kv</code> and <code>d_(model)</code>, because dropping it costs some implementation effort for a flexibility that most people won't use anyway. I suspect (I am not super familiar with AllenNLP) that they have made similar assumptions here, which is why you cannot load the model.</p>
<p>Also, to clarify this, here is a peek at the <code>modules</code> of a loaded <code>mt5-small</code>:</p>
<pre class="lang-py prettyprint-override"><code>T5Block(
(layer): ModuleList(
(0): T5LayerSelfAttention(
(SelfAttention): T5Attention(
(q): Linear(in_features=512, out_features=384, bias=False)
(k): Linear(in_features=512, out_features=384, bias=False)
(v): Linear(in_features=512, out_features=384, bias=False)
(o): Linear(in_features=384, out_features=512, bias=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
(1): T5LayerFF(
(DenseReluDense): T5DenseGatedGeluDense(
(wi_0): Linear(in_features=512, out_features=1024, bias=False)
(wi_1): Linear(in_features=512, out_features=1024, bias=False)
(wo): Linear(in_features=1024, out_features=512, bias=False)
(dropout): Dropout(p=0.1, inplace=False)
)
(layer_norm): T5LayerNorm()
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
</code></pre>
<p>You can get the full model layout by simply calling <code>list(model.modules())</code></p> | 2022-01-20 09:48:31.087000+00:00 | 2022-01-20 09:48:31.087000+00:00 | null | null | 70,769,151 | <p>The configuration file for the HuggingFace <strong>google/mt5-small</strong> Model (<a href="https://huggingface.co/google/mt5-small" rel="nofollow noreferrer">https://huggingface.co/google/mt5-small</a>)</p>
<p>defines</p>
<pre><code>{
...
"d_model": 512,
...
"num_heads": 6,
...
}
</code></pre>
<p>Link to the config file: <a href="https://huggingface.co/google/mt5-small/resolve/main/config.json" rel="nofollow noreferrer">https://huggingface.co/google/mt5-small/resolve/main/config.json</a></p>
<p><strong>Question:</strong></p>
<p>As far as I understood, the number of attention heads should be a divisor of the model dimension. This is clearly not the case in this config file.</p>
<p>Do I misunderstand how self-attention is applied in mT5?</p>
<p>When I use the AllenNLP model (<a href="https://github.com/allenai/allennlp-models/blob/main/allennlp_models/generation/models/t5.py" rel="nofollow noreferrer">https://github.com/allenai/allennlp-models/blob/main/allennlp_models/generation/models/t5.py</a>)
as a sequence-to-sequence model, I receive an error message.</p>
<p><strong>Summary:</strong></p>
<pre><code>allennlp.common.checks.ConfigurationError: The hidden size (512) is not a multiple of the number of attention heads (6)
</code></pre>
<p><strong>Full</strong></p>
<pre><code>Traceback (most recent call last):
File "/snap/pycharm-professional/269/plugins/python/helpers/pydev/pydevd.py", line 1500, in _exec
runpy._run_module_as_main(module_name, alter_argv=False)
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/__main__.py", line 50, in <module>
run()
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/__main__.py", line 46, in run
main(prog="allennlp")
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/__init__.py", line 123, in main
args.func(args)
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 112, in train_model_from_args
train_model_from_file(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 178, in train_model_from_file
return train_model(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 254, in train_model
model = _train_worker(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 490, in _train_worker
train_loop = TrainModel.from_params(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 652, in from_params
return retyped_subclass.from_params(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 686, in from_params
return constructor_to_call(**kwargs) # type: ignore
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/commands/train.py", line 766, in from_partial_objects
model_ = model.construct(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 82, in construct
return self.constructor(**contructor_kwargs)
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 66, in constructor_to_use
return self._constructor.from_params( # type: ignore[union-attr]
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 652, in from_params
return retyped_subclass.from_params(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 686, in from_params
return constructor_to_call(**kwargs) # type: ignore
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp_models/generation/models/t5.py", line 32, in __init__
self.t5 = T5Module.from_pretrained_module(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/transformer_module.py", line 251, in from_pretrained_module
model = cls._from_config(config, **kwargs)
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/t5.py", line 852, in _from_config
return cls(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/t5.py", line 783, in __init__
self.encoder: T5EncoderStack = encoder.construct(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 82, in construct
return self.constructor(**contructor_kwargs)
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/t5.py", line 600, in basic_encoder
self_attention=block_self_attention.construct(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 82, in construct
return self.constructor(**contructor_kwargs)
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/lazy.py", line 66, in constructor_to_use
return self._constructor.from_params( # type: ignore[union-attr]
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/common/from_params.py", line 686, in from_params
return constructor_to_call(**kwargs) # type: ignore
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/attention_module.py", line 471, in __init__
super().__init__(
File "/home/lars/anaconda3/envs/mare2/lib/python3.9/site-packages/allennlp/modules/transformer/attention_module.py", line 91, in __init__
raise ConfigurationError(
allennlp.common.checks.ConfigurationError: The hidden size (512) is not a multiple of the number of attention heads (6)
</code></pre> | 2022-01-19 10:41:54.937000+00:00 | 2022-01-20 09:48:31.087000+00:00 | null | nlp|huggingface-transformers|allennlp | ['https://arxiv.org/pdf/1706.03762.pdf'] | 1 |
6,099,676 | <p>What you're trying to do is <em>online 2D bin packing</em>. It's online because you don't have all your small pictures in hand before you attempt to pack them into the big picture. Furthermore some small pictures will be "deallocated" and their space will be freed up. On the other hand, an offline algorithm allows you to do things like sort your small pictures from largest to smallest before packing them.</p>
<p>Here's an article that surveys the state of the art in 2D packing: <em><a href="http://www.csc.liv.ac.uk/~epa/surveyhtml.html" rel="nofollow noreferrer">Survey on two-dimensional packing</a></em>. It's quite theoretical.</p>
<p>This article <em><a href="http://arxiv.org/abs/0906.0409" rel="nofollow noreferrer">A New Upper Bound on 2D Online Bin Packing</a></em> cites other articles that describe online 2D packing algorithms.</p>
<p>People in the gaming world have a similar problem as you do; they call it <em>texture packing</em> or <em><a href="http://en.wikipedia.org/wiki/Texture_atlas" rel="nofollow noreferrer">texture atlas</a></em>. However, they use offline algorithms.</p>
<p>John Ratcliff posted a <a href="http://codesuppository.blogspot.com/2009/04/texture-packing-code-snippet-to-compute.html" rel="nofollow noreferrer">blog article</a> on texture packing.</p>
<p>See also this related question on gamedev: <a href="https://gamedev.stackexchange.com/questions/2829/texture-packing-algorithm">https://gamedev.stackexchange.com/questions/2829/texture-packing-algorithm</a></p> | 2011-05-23 15:52:46.053000+00:00 | 2011-05-23 17:19:47.763000+00:00 | 2017-04-13 12:18:41.823000+00:00 | null | 6,098,607 | <p>I need algorithm which splits big static sized rectangle to small ones. A perfect implementation for me look like this:</p>
<pre><code>struct RECT
{
int l,t,r,b;
};
class BigRect
{
public:
// width and height of big rect
BigRect( unsigned width, unsigned height );
// returns -1 if rect cannot be allocated, otherwise returns id of found rect
int GetRect( unsigned width, unsigned height, RECT &out );
// returns allocated rect to big rectangle
void FreeRect( int id );
};
void test()
{
BigRect r( 10, 10 );
RECT out;
r.GetRect( 4, 4, out ); // rect found ({0,0,4,4} for example), returns 1
r.GetRect( 5, 5, out ); // rect found ({4,0,9,5} for example), returns 2
r.GetRect( 6, 6, out ); // no place found for rect, returns -1
r.FreeRect( 2 ); // add {4,0,9,5} back to rect
r.GetRect( 6, 6, out ); // rect found (4,0,10,6)
}
</code></pre>
<p>So I need algorithm for <code>GetRect</code> and <code>FreeRect</code> methods. Any ideas and links would be appreciated.</p> | 2011-05-23 14:28:31.083000+00:00 | 2011-06-23 17:50:11.937000+00:00 | 2011-06-23 17:50:11.937000+00:00 | c++|algorithm|space-partitioning | ['http://www.csc.liv.ac.uk/~epa/surveyhtml.html', 'http://arxiv.org/abs/0906.0409', 'http://en.wikipedia.org/wiki/Texture_atlas', 'http://codesuppository.blogspot.com/2009/04/texture-packing-code-snippet-to-compute.html', 'https://gamedev.stackexchange.com/questions/2829/texture-packing-algorithm'] | 5 |
22,722,865 | <p>For <code>dataframe</code> objects you can use <code>HTTP GET</code> and set the <code>dataframe</code> argument:</p>
<pre><code>GET http://localhost:7177/ocpu/tmp/x000a0fb8/json?dataframe=rows
</code></pre>
<p>For example the <code>Boston</code> object from the <code>MASS</code> package is a dataframe as well:</p>
<pre><code>https://cran.ocpu.io/MASS/data/Boston/json?dataframe=columns
https://cran.ocpu.io/MASS/data/Boston/json?dataframe=rows
</code></pre>
<p>For <code>HTTP GET</code> requests to a <code>.../json</code> endpoint, all the http parameters are mapped to arguments in the <code>toJSON</code> function from the <a href="http://cran.r-project.org/package=jsonlite" rel="nofollow">jsonlite package</a>. You can can also specify other <code>toJSON</code> arguments:</p>
<pre><code>https://cran.ocpu.io/MASS/data/Boston/json?dataframe=columns&digits=4
</code></pre>
<p>To see which arguments are available, have a look at the <a href="http://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf" rel="nofollow">jsonlite manual</a> or <a href="https://www.opencpu.org/posts/publishing-data-with-opencpu/" rel="nofollow">this post</a>.</p>
<p>Note that this only works if you do the 2-step procedure: first an <code>HTTP POST</code> on a function that returns a <code>dataframe</code>, followed by retrieving that object in <code>json</code> format with an <code>HTTP GET</code> request. You cannot specify <code>toJSON</code> parameters when you use the 1-step shortcut where you postfix the <code>POST</code> request with <code>/json</code>, because in <code>POST</code> requests the HTTP parameters always get mapped to the function call.</p>
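<p>For illustration, the 2-step flow from a client could look like the rough sketch below. This is not part of any OpenCPU client package; it just uses Python's <code>requests</code> against the same local server and endpoints as above, and parses the temporary session path from the first line of the POST response:</p>
<pre class="lang-py prettyprint-override"><code>import requests

base = "http://localhost:7177/ocpu"

# step 1: POST to a function that returns a data frame
r = requests.post(base + "/library/base/R/data.frame",
                  json={"xx": [1, 2, 3], "yy": [6, 7, 8]})
r.raise_for_status()
val_path = r.text.splitlines()[0].strip()   # e.g. /ocpu/tmp/x000a0fb8/R/.val

# step 2: GET the object as JSON, choosing the column-based encoding
out = requests.get("http://localhost:7177" + val_path + "/json",
                   params={"dataframe": "columns", "digits": 4})
print(out.json())   # {'xx': [1, 2, 3], 'yy': [6, 7, 8]}
</code></pre>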
<p>The reason for this default is that the row based design seems to be the most conventional and interoperable way of encoding tabular data. The <a href="http://arxiv.org/abs/1403.2805" rel="nofollow">jsonlite paper/vignette</a> goes into some more detail. Note that it also works the other way around: you don't have to call the <code>data.frame</code> function to create a dataframe, just posting an argument in the form: </p>
<pre><code>[{"xx":1,"yy":6},{"xx":2,"yy":7},{"xx":3,"yy":8}]
</code></pre>
<p>will automatically turn it into a data frame:</p>
<pre><code>curl https://public.opencpu.org/ocpu/library/base/R/summary/console -d object='[{"xx":1,"yy":6},{"xx":2,"yy":7},{"xx":3,"yy":8}]'
</code></pre> | 2014-03-28 20:45:07.333000+00:00 | 2014-03-28 20:59:52.273000+00:00 | 2014-03-28 20:59:52.273000+00:00 | null | 22,718,874 | <p>Is there a clean way to change the default "/json" postfix option on data.frames to be column-based versus row-based?</p>
<p>Data.frames in R, if I understand correctly, are really just named lists where each list is the same length as the others. Using <code>jsonlite</code>, it's simple to show the difference (trivial example, yes):</p>
<pre><code>library(jsonlite)
ll <- list(xx=1:3, yy=6:8)
dd <- data.frame(xx=1:3, yy=6:8)
toJSON(dd)
# [1] "[ { \"xx\" : 1, \"yy\" : 6 }, { \"xx\" : 2, \"yy\" : 7 }, { \"xx\" : 3, \"yy\" : 8 } ]"
toJSON(ll)
# [1] "{ \"xx\" : [ 1, 2, 3 ], \"yy\" : [ 6, 7, 8 ] }"
toJSON(dd, dataframe='column')
# [1] "{ \"xx\" : [ 1, 2, 3 ], \"yy\" : [ 6, 7, 8 ] }"
toJSON(as.list(dd))
# [1] "{ \"xx\" : [ 1, 2, 3 ], \"yy\" : [ 6, 7, 8 ] }"
</code></pre>
<p>where the last three are identical. It's easy to force it to look the same either by using the <code>dataframe</code> argument to <code>toJSON</code> or by coercing the <code>data.frame</code> into a <code>list</code>.</p>
<p>Using OpenCPU's API, the calls look similar:</p>
<pre><code>$ curl http://localhost:7177/ocpu/library/base/R/list/json -H "Content-Type: application/json" -d '{ "xx":[1,2,3], "yy":[6,7,8] }'
{
"xx" : [
1,
2,
3
],
"yy" : [
6,
7,
8
]
}
$ curl http://localhost:7177/ocpu/library/base/R/data.frame/json -H "Content-Type: application/json" -d '{ "xx":[1,2,3], "yy":[6,7,8] }'
[
{
"xx" : 1,
"yy" : 6
},
{
"xx" : 2,
"yy" : 7
},
{
"xx" : 3,
"yy" : 8
}
]
</code></pre>
<p>If I want the <code>data.frame</code> itself to be JSON-ified column-based then I need to coerce it to a <code>list</code>:</p>
<pre><code>$ curl http://localhost:7177/ocpu/library/base/R/data.frame -H "Content-Type: application/json" -d '{ "xx":[1,2,3], "yy":[6,7,8] }'
/ocpu/tmp/x000a0fb8/R/.val
/ocpu/tmp/x000a0fb8/stdout
/ocpu/tmp/x000a0fb8/source
/ocpu/tmp/x000a0fb8/console
/ocpu/tmp/x000a0fb8/info
$ curl http://localhost:7177/ocpu/library/base/R/as.list/json -d "x=x000a0fb8"
{
"xx" : [
1,
2,
3
],
"yy" : [
6,
7,
8
]
}
</code></pre>
<p>Three questions:</p>
<ol>
<li><p>Is there a way to change the default behavior of the OpenCPU auto-JSON-ification to be column-based?</p></li>
<li><p>Is there a reason (besides "had to default to something") that it defaults to row-based? (So that I can better understand the underpinnings and efficiencies, not meant as a challenge.)</p></li>
<li><p>This is all academic, though, since most (if not all) libraries accepting the JSON output will understand and translate between the formats transparently. Right?</p></li>
</ol>
<p>(Win7 x64, R 3.0.3, opencpu 1.2.3, jsonlite 0.9.4)</p>
<p>(PS: Thanks, Jeroen, OpenCPU is awesome! The more I play, the more I like.)</p> | 2014-03-28 16:56:23.520000+00:00 | 2016-08-28 10:16:44.790000+00:00 | null | json|r|opencpu|jsonlite | ['http://cran.r-project.org/package=jsonlite', 'http://cran.r-project.org/web/packages/jsonlite/jsonlite.pdf', 'https://www.opencpu.org/posts/publishing-data-with-opencpu/', 'http://arxiv.org/abs/1403.2805'] | 4 |
8,466,764 | <p>This paper <a href="http://arxiv.org/abs/0805.1598" rel="nofollow noreferrer">"A Simple In-Place Algorithm for In-Shuffle"</a> describes how to transpose a 2*N matrix and gives a hint on how to do it for other cases, so 3*N may also be possible. <a href="https://stackoverflow.com/a/23529396/1009831">This answer to another question</a> shows that it is indeed possible.</p>
<p>Or use an algorithm which writes each value to its transposed place, then does the same for the value from that place, and so on until the cycle closes. Flag processed values in a bit vector and continue until this vector is all 1s.</p>
<p>Neither algorithm is cache-friendly. Some clever use of the PREFETCH instruction can probably improve this.</p>
<p><strong>Edit:</strong></p>
<p>C++, RGB to single planes, not optimized:</p>
<pre><code>#include <iostream>
#include <bitset>
#include <vector>
enum {N = 8};
void transpose(std::vector<char>& a)
{
std::bitset<3*N> b;
for (int i = 1; i < 3*N; ++i)
{
if (b[i])
continue;
int ptr = i;
int next;
char nextVal = a[i];
do {
next = ptr/3 + N*(ptr%3);
char prevVal = nextVal;
nextVal = a[next];
a[next] = prevVal;
ptr = next;
b[ptr] = true;
}
while (ptr != i);
}
}
int main()
{
std::vector<char> a(3*N);
for (int i = 0; i != 3*N; ++i)
a[i] = i;
transpose(a);
for (int i = 0; i != 3*N; ++i)
std::cout << (int)a[i] << std::endl;
return 0;
}
</code></pre>
<p>My original intent is to use a bit vector of size WIDTH*HEIGHT, which gives an overhead of WIDTH*HEIGHT/8. But it is always possible to sacrifice speed for space. The bit vector may be of size WIDTH or HEIGHT or any desirable value, even 0. The trick is to maintain a pointer to the cell before which all values are transposed. The bit vector covers the cells starting from this pointer. After it is all 1s, the pointer is moved to the next position, then all the algorithm steps are performed except the actual data movement. And the bit vector is ready to continue transposing. This variant is O(N^2) instead of O(N).</p>
<p><strong>Edit2:</strong></p>
<p>The PREFETCH optimization is not difficult to implement: just calculate the indexes, invoke PREFETCH, and put the indexes into a queue (ring buffer), then take indexes from the queue and move the data.</p>
<p><strong>Edit3:</strong></p>
<p>The idea of another algorithm, which is O(1) size and O(N*log(N)) time, is cache-friendly and may be faster than the "cycle" algorithm (for image sizes < 1Gb):</p>
<ul>
<li>Split N*3 matrix to several 3*3 matrices of char and transpose them</li>
<li>Split the result to 3*3 matrices of char[3] and transpose them</li>
<li>Continue while matrices size is less than the array size</li>
<li>Now we have up to 3*2*log3(N) ordered pieces. Join them.</li>
<li>First join pieces of equal size. Very simple "cycles" of length 4 may be used.</li>
<li>Join unequal-sized pieces with reverse(reverse(a), reverse(b))</li>
</ul> | 2011-12-11 19:26:56.807000+00:00 | 2014-05-29 10:45:21.953000+00:00 | 2017-05-23 12:20:12.120000+00:00 | null | 8,465,950 | <p>I've got a flat array of byte RGB values that goes <code>R1 G1 B1 R2 G2 B2 R3 G3 B3 ... Rn Gn Bn</code>. So my data looks like:</p>
<pre><code>char imageData[WIDTH * HEIGHT * 3];
</code></pre>
<p>But I want to pass a WIDTH*HEIGHT array to an existing C library that expects a single plane of this data. That would be a sequence of just the R values (or just the G, or just the B).</p>
<p>It's easy enough to allocate a new array and copy the data (duh). But the images are very large. If it weren't a C library but took some kind of iteration interface to finesse the "slicing" traversal, that would be great. But I can't edit the code I'm calling...it wants a plain old pointer to a block of sequential memory.</p>
<p>HOWEVER I have write access to this array. It is viable to create a routine that would sort it into color planes. I'd also need a reverse transformation that would put it back, but by definition the same method that sorted it into planes could be applied to unsort it.</p>
<p>How efficiently can I (in place) turn this array into <code>R1 R2 R3 ... Rn G1 G2 G3 ... Gn B1 B2 B3 ... Bn</code> and then back again? Any non-naive algorithms?</p> | 2011-12-11 17:26:23.050000+00:00 | 2014-05-29 10:45:21.953000+00:00 | 2011-12-11 17:51:56.980000+00:00 | c++|c|algorithm|image-processing|slice | ['http://arxiv.org/abs/0805.1598', 'https://stackoverflow.com/a/23529396/1009831'] | 2 |
29,390,521 | <p>Talking about Java, <a href="http://nlp.stanford.edu/software/CRF-NER.shtml" rel="nofollow">Stanford NER</a> seems to be the best, ceteris paribus.
There are also <a href="http://alias-i.com/lingpipe/demos/tutorial/ne/read-me.html" rel="nofollow">LingPipe</a>, <a href="http://cogcomp.cs.illinois.edu/page/software_view/4" rel="nofollow">Illinois</a> and others; take a look at the <a href="http://aclweb.org/aclwiki/index.php?title=Named_entity_recognizers" rel="nofollow">ACL list</a>.</p>
<p>Also consider <a href="http://arxiv.org/ftp/arxiv/papers/1308/1308.0661.pdf" rel="nofollow">this paper</a> for experimental comparison of several NERCs.</p> | 2015-04-01 12:13:17.477000+00:00 | 2015-04-01 12:13:17.477000+00:00 | null | null | 29,390,384 | <p>I have some research project which needs best NER results .
Can anyone please help me with the best NER tools which have a Python library.</p> | 2015-04-01 12:07:04.417000+00:00 | 2016-04-29 19:29:18.200000+00:00 | 2016-04-29 19:29:18.200000+00:00 | python | ['http://nlp.stanford.edu/software/CRF-NER.shtml', 'http://alias-i.com/lingpipe/demos/tutorial/ne/read-me.html', 'http://cogcomp.cs.illinois.edu/page/software_view/4', 'http://aclweb.org/aclwiki/index.php?title=Named_entity_recognizers', 'http://arxiv.org/ftp/arxiv/papers/1308/1308.0661.pdf'] | 5 |
21,180,209 | <p>I think you should try this in your scrapy shell, for experimentation:</p>
<ol>
<li><p>scrapy shell '<a href="http://export.arxiv.org/rss/hep-th/recent" rel="nofollow noreferrer">http://export.arxiv.org/rss/hep-th/recent</a>'</p></li>
<li><p>sel.remove_namespaces()</p></li>
<li><p>a = sel.xpath('//title/text()')</p></li>
</ol>
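<p>Translated back into your spider, the same steps would look roughly like the untested sketch below (same old <code>XmlXPathSelector</code> API as in the question; <code>remove_namespaces()</code> needs a reasonably recent Scrapy version):</p>
<pre class="lang-py prettyprint-override"><code>from scrapy.spider import BaseSpider
from scrapy.selector import XmlXPathSelector

class arXivSpider(BaseSpider):
    name = "arxiv"
    allowed_domains = ["arxiv.org"]
    start_urls = ["http://export.arxiv.org/rss/hep-th/recent"]

    def parse(self, response):
        xxs = XmlXPathSelector(response)
        xxs.remove_namespaces()               # strip the rss/rdf namespaces
        papers = xxs.select('//item')         # no longer empty
        titles = xxs.select('//title/text()').extract()
        print titles
</code></pre>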
<p><img src="https://i.stack.imgur.com/Zpmty.jpg" alt="enter image description here"></p> | 2014-01-17 07:29:46.303000+00:00 | 2014-01-17 07:29:46.303000+00:00 | null | null | 8,686,105 | <p>I'm trying to get information from arXiv's page with <a href="http://scrapy.org/" rel="nofollow">scrapy</a> but cannot select "items" from their <a href="http://export.arxiv.org/rss/hep-th/recent" rel="nofollow">xml page</a>:</p>
<pre><code>from scrapy.spider import BaseSpider
from scrapy.selector import XmlXPathSelector
class arXivSpider(BaseSpider):
name = "arxiv"
allowed_domains = ["arxiv.org"]
start_urls = ["http://export.arxiv.org/rss/hep-th/recent"]
def parse(self, response):
xxs = XmlXPathSelector(response)
papers = xxs.select('//item')
print papers
</code></pre>
<p>The item object is pretty straightforward, if I could extract it...</p>
<pre><code><item rdf:about="http://arxiv.org/abs/1112.5754">
<title>blah blah ... blah</title>
<link>http://arxiv.org/abs/1112.5754</link>
<description rdf:parseType="Literal"><p>...</p></description>
<dc:creator>blah, blah blah</dc:creator>
</item>
</code></pre>
<p>The script runs perfectly, it's just <code>papers = []</code> so the spider is not collecting <code>item</code>'s. It may have to do w/ namespaces... </p> | 2011-12-31 03:33:22.470000+00:00 | 2014-01-17 07:29:46.303000+00:00 | 2011-12-31 06:12:01.203000+00:00 | python|xml|screen-scraping|scrapy | ['http://export.arxiv.org/rss/hep-th/recent'] | 1 |
8,686,505 | <blockquote>
<p>It may have to do w/ namespaces...</p>
</blockquote>
<p>Yes it is.</p>
<p>XmlXPathSelector has the ability to handle namespaces by registering them (see the <a href="http://doc.scrapy.org/en/latest/topics/selectors.html#xmlxpathselector-examples" rel="nofollow">examples in the documentation</a>). In your case:</p>
<pre><code>$ scrapy shell http://export.arxiv.org/rss/hep-th/recent
In [1]: xxs.register_namespace('g', 'http://purl.org/rss/1.0/')
In [2]: xxs.namespaces
Out[2]: {'g': 'http://purl.org/rss/1.0/'}
In [3]: xxs.select('//item')
Out[3]: []
In [4]: xxs.select('//g:item')
Out[4]:
[<XmlXPathSelector xpath='//g:item' data=u'<item xmlns="http://purl.org/rss/1.0/" x'>,
<XmlXPathSelector xpath='//g:item' data=u'<item xmlns="http://purl.org/rss/1.0/" x'>,
...
</code></pre> | 2011-12-31 05:28:44.943000+00:00 | 2011-12-31 05:42:13.553000+00:00 | 2011-12-31 05:42:13.553000+00:00 | null | 8,686,105 | <p>I'm trying to get information from arXiv's page with <a href="http://scrapy.org/" rel="nofollow">scrapy</a> but cannot select "items" from their <a href="http://export.arxiv.org/rss/hep-th/recent" rel="nofollow">xml page</a>:</p>
<pre><code>from scrapy.spider import BaseSpider
from scrapy.selector import XmlXPathSelector
class arXivSpider(BaseSpider):
name = "arxiv"
allowed_domains = ["arxiv.org"]
start_urls = ["http://export.arxiv.org/rss/hep-th/recent"]
def parse(self, response):
xxs = XmlXPathSelector(response)
papers = xxs.select('//item')
print papers
</code></pre>
<p>The item object is pretty straightforward, if I could extract it...</p>
<pre><code><item rdf:about="http://arxiv.org/abs/1112.5754">
<title>blah blah ... blah</title>
<link>http://arxiv.org/abs/1112.5754</link>
<description rdf:parseType="Literal"><p>...</p></description>
<dc:creator>blah, blah blah</dc:creator>
</item>
</code></pre>
<p>The script runs perfectly, it's just <code>papers = []</code> so the spider is not collecting <code>item</code>'s. It may have to do w/ namespaces... </p> | 2011-12-31 03:33:22.470000+00:00 | 2014-01-17 07:29:46.303000+00:00 | 2011-12-31 06:12:01.203000+00:00 | python|xml|screen-scraping|scrapy | ['http://doc.scrapy.org/en/latest/topics/selectors.html#xmlxpathselector-examples'] | 1 |
55,170,820 | <p>The problem is that <code>x*x</code> is a very different beast than <code>a*x</code>.</p>
<p>Please note what a usual "neural network" does: it stacks <code>y = f(W*x + b)</code> a few times, never multiplying <code>x</code> with itself. Therefore, you'll never get perfect reconstruction of <code>x*x</code>. Unless you set <code>f(x) = x*x</code> or similar.</p>
<p>What you can get is an approximation in the range of values presented during training (and perhaps a very little bit of extrapolation). Anyway, I'd recommend you to work with a smaller range of values, it will be easier to optimize the problem.</p>
<p>And on a philosophical note: In machine learning, I find it more useful to think of good/bad, rather than correct/wrong. Especially with regression, you cannot get the result "right" unless you have the exact model. In which case there is nothing to learn.</p>
<hr>
<p>There actually are some NN architectures multiplying <code>f(x)</code> with <code>g(x)</code>, most notably <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="noreferrer">LSTMs</a> and <a href="https://arxiv.org/pdf/1505.00387.pdf" rel="noreferrer">Highway networks</a>. But even these have one or both of <code>f(x)</code>, <code>g(x)</code> bounded (by logistic sigmoid or tanh), and are thus unable to model <code>x*x</code> fully.</p>
<hr>
<p>Since there is some misunderstanding expressed in comments, let me emphasize a few points:</p>
<ol>
<li>You <strong>can approximate</strong> your data.</li>
<li>To do well in any sense, you do need a <strong>hidden layer</strong>.</li>
<li>But <strong>no more data</strong> is necessary, though if you cover the space, the model will fit more closely, see <a href="https://stackoverflow.com/a/55203161/9703830">desernaut's answer</a>.</li>
</ol>
<p>As an example, here is a result from a model with a single hidden layer of 10 units with tanh activation, trained by SGD with learning rate 1e-3 for 15k iterations to minimize the MSE of your data. Best of five runs:</p>
<p><a href="https://i.stack.imgur.com/3gMZ4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3gMZ4.png" alt="Performance of a simple NN trained on OP's data"></a></p>
<p>Here is the full code to reproduce the result. Unfortunately, I cannot install Keras/TF in my current environment, but I hope that the PyTorch code is accessible :-)</p>
<pre><code>#!/usr/bin/env python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
X = torch.tensor([range(-10,11)]).float().view(-1, 1)
Y = X*X
model = nn.Sequential(
nn.Linear(1, 10),
nn.Tanh(),
nn.Linear(10, 1)
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_func = nn.MSELoss()
for _ in range(15000):
optimizer.zero_grad()
pred = model(X)
loss = loss_func(pred, Y)
loss.backward()
optimizer.step()
x = torch.linspace(-12, 12, steps=200).view(-1, 1)
y = model(x)
f = x*x
plt.plot(x.detach().view(-1).numpy(), y.detach().view(-1).numpy(), 'r.', linestyle='None')
plt.plot(x.detach().view(-1).numpy(), f.detach().view(-1).numpy(), 'b')
plt.show()
</code></pre> | 2019-03-14 19:42:56.797000+00:00 | 2019-03-17 10:52:39.890000+00:00 | 2019-03-17 10:52:39.890000+00:00 | null | 55,170,460 | <p>I made a simple module that should figure out the relationship between input and output numbers, in this case, x and x squared. The code in Python:</p>
<pre><code>import numpy as np
import tensorflow as tf
# TensorFlow only log error messages.
tf.logging.set_verbosity(tf.logging.ERROR)
features = np.array([-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8,
9, 10], dtype = float)
labels = np.array([100, 81, 64, 49, 36, 25, 16, 9, 4, 1, 0, 1, 4, 9, 16, 25, 36, 49, 64,
81, 100], dtype = float)
model = tf.keras.Sequential([
tf.keras.layers.Dense(units = 1, input_shape = [1])
])
model.compile(loss = "mean_squared_error", optimizer = tf.keras.optimizers.Adam(0.0001))
model.fit(features, labels, epochs = 50000, verbose = False)
print(model.predict([4, 11, 20]))
</code></pre>
<p>I tried a different number of units, and adding more layers, and even using the <code>relu</code> activation function, but the results were always wrong.
It works with other relationships like x and 2x. What is the problem here?</p> | 2019-03-14 19:20:04.150000+00:00 | 2022-05-06 14:37:33.987000+00:00 | 2022-05-06 14:37:33.987000+00:00 | python|tensorflow|machine-learning|keras|neural-network | ['http://colah.github.io/posts/2015-08-Understanding-LSTMs/', 'https://arxiv.org/pdf/1505.00387.pdf', 'https://stackoverflow.com/a/55203161/9703830', 'https://i.stack.imgur.com/3gMZ4.png'] | 4 |
3,284,096 | <p>First, you should aim to make your game turn based at some level for the AI (i.e. you can model it as turn based even if it is not entirely turn based; in an RTS you may be able to break discrete intervals of time into turns). Second, you should determine how much information the AI should work with, that is, whether the AI is allowed to cheat and know every move of its opponent (thereby making it stronger) or whether it should know less or more. Third, you should define a cost function of a state, the idea being that a higher cost means a worse state for the computer to be in. Fourth, you need a move generator, generating all valid states the AI can transition to from a given state (this may be homogeneous [state-independent] or heterogeneous [state-dependent]).</p>
<p>The thing is, the cost function will be greatly influenced by what exactly you define the state to be. The more information you encode in the state, the better balanced your AI will be, but the more difficult it will be for it to perform, as it will have to search exponentially more for every additional state variable you include (in an exhaustive search).</p>
<p>If you provide a definition of a state and a cost function your problem transforms to a general problem in AI that can be tackled with any algorithm of your choice.</p>
<p>Here is a summary of what I think would work well:</p>
<ol>
<li><p>Evolutionary algorithms may work well if you put enough effort into them, but they will add a layer of complexity that will create room for bugs amongst other things that can go wrong. They will also require extreme amounts of tweaking of the fitness function etc. I don't have much experience working with these but if they are anything like neural networks (which I believe they are since both are heuristics inspired by biological models) you will quickly find they are fickle and far from consistent. Most importantly, I doubt they add any benefits over the option I describe in 3.</p></li>
<li><p>With the cost function and state defined it would technically be possible for you to apply gradient decent (with the assumption that the state function is differentiable and the domain of the state variables are continuous) however this would probably yield inferior results, since the biggest weakness of gradient descent is getting stuck in local minima. To give an example, this method would be prone to something like attacking the enemy always as soon as possible because there is a non-zero chance of annihilating them. Clearly, this may not be desirable behaviour for a game, however, gradient decent is a greedy method and doesn't know better.</p></li>
<li><p>This option would be my most highest recommended one: simulated annealing. Simulated annealing would (IMHO) have all the benefits of 1. without the added complexity while being much more robust than 2. In essence SA is just a random walk amongst the states. So in addition to the cost and states you will have to define a way to randomly transition between states. SA is also not prone to be stuck in local minima, while producing very good results quite consistently. The only tweaking required with SA would be the cooling schedule--which decides how fast SA will converge. The greatest advantage of SA I find is that it is conceptually simple and produces superior results empirically to most other methods I have tried. Information on SA can be found <a href="http://en.wikipedia.org/wiki/Simulated_annealing" rel="nofollow noreferrer">here</a> with a long list of generic implementations at the bottom.</p></li>
</ol>
<p>3b. (<em>Edit Added much later</em>) SA and the techniques I listed above are general AI techniques and not really specialized to AI for games. In general, the more specialized the algorithm the more chance it has at performing better. See No Free Lunch Theorem <a href="https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization" rel="nofollow noreferrer">2</a>. Another extension of 3 is something called parallel tempering which dramatically improves the performance of SA by helping it avoid local optima. Some of the original papers on parallel tempering are quite dated <a href="https://arxiv.org/pdf/physics/9710041.pdf" rel="nofollow noreferrer">3</a>, but others have been updated<a href="https://arxiv.org/pdf/cond-mat/0407273.pdf" rel="nofollow noreferrer">4</a>.</p>
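<p>To make option 3 concrete, the core loop is tiny. The sketch below is generic Python, not tied to any engine: <code>cost()</code> and <code>random_neighbour()</code> are placeholders for your game-specific state evaluation and move generator, and the cooling schedule is the only real knob.</p>
<pre class="lang-py prettyprint-override"><code>import math, random

def cost(state):
    # placeholder: plug in your game-state evaluation here
    return sum(x * x for x in state)

def random_neighbour(state):
    # placeholder: plug in your move generator here
    s = list(state)
    s[random.randrange(len(s))] += random.uniform(-1.0, 1.0)
    return s

def anneal(initial, t_start=10.0, t_end=0.01, cooling=0.995):
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t_start
    while t > t_end:
        candidate = random_neighbour(state)
        cc = cost(candidate)
        # always accept improvements; accept worse states with probability exp(-delta/t)
        if cc < c or random.random() < math.exp((c - cc) / t):
            state, c = candidate, cc
            if c < best_c:
                best, best_c = state, c
        t *= cooling
    return best, best_c

print(anneal([random.uniform(-5.0, 5.0) for _ in range(10)]))
</code></pre>
<p>Parallel tempering (3b.) is essentially several of these loops run at different fixed temperatures, with states swapped between them from time to time.</p>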
<p>Regardless of what method you choose in the end, its going to be very important to break your problem down into states and a cost function as I said earlier. As a rule of thumb I would start with 20-50 state variables as your state search space is exponential in the number of these variables.</p> | 2010-07-19 18:58:41.980000+00:00 | 2017-07-17 21:46:30.977000+00:00 | 2017-07-17 21:46:30.977000+00:00 | null | 3,275,174 | <p>I'm designing a realtime strategy wargame where the AI will be responsible for controlling a large number of units (possibly 1000+) on a large hexagonal map.</p>
<p>A unit has a number of action points which can be expended on movement, attacking enemy units or various special actions (e.g. building new units). For example, a tank with 5 action points could spend 3 on movement then 2 in firing on an enemy within range. Different units have different costs for different actions etc.</p>
<p>Some additional notes:</p>
<ul>
<li>The output of the AI is a "command" to any given unit</li>
<li>Action points are allocated at the beginning of a time period, but may be spent at any point within the time period (this is to allow for realtime multiplayer games). Hence "do nothing and save action points for later" is a potentially valid tactic (e.g. a gun turret that cannot move waiting for an enemy to come within firing range)</li>
<li>The game is updating in realtime, but the AI can get a consistent snapshot of the game state at any time (thanks to the game state being one of Clojure's persistent data structures)</li>
<li>I'm not expecting "optimal" behaviour, just something that is not obviously stupid and provides reasonable fun/challenge to play against</li>
</ul>
<p>What can you recommend in terms of specific algorithms/approaches that would allow for the right balance between efficiency and reasonably intelligent behaviour? </p> | 2010-07-18 10:42:33.387000+00:00 | 2017-07-17 21:46:30.977000+00:00 | 2010-07-18 11:25:11.953000+00:00 | algorithm|artificial-intelligence | ['http://en.wikipedia.org/wiki/Simulated_annealing', 'https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization', 'https://arxiv.org/pdf/physics/9710041.pdf', 'https://arxiv.org/pdf/cond-mat/0407273.pdf'] | 4 |
38,450,292 | <p>have a look at <a href="https://stackoverflow.com/questions/37889914/neural-net-projection-layers-for-word-representation">this answer here</a></p>
<p>It explains the difference between the hidden layer and the projection layer.</p>
<p>Referring to this <a href="http://mi.eng.cam.ac.uk/~gwb24/publications/mphil.thesis.pdf" rel="nofollow">thesis</a></p>
<p>Also, do read <a href="http://arxiv.org/pdf/1301.3781v3.pdf" rel="nofollow">this paper</a> by Tomas Mikolov and go through <a href="http://www.coling-2014.org/COLING%202014%20Tutorial-fix%20-%20Tomas%20Mikolov.pdf" rel="nofollow">this tutorial</a>.</p>
<p>this will really improve your understanding.</p>
<p>Hope this helps!</p> | 2016-07-19 05:37:19.033000+00:00 | 2016-07-19 05:43:37.897000+00:00 | 2016-07-19 05:43:37.897000+00:00 | null | 38,436,939 | <p>I'm trying to write code for <a href="http://jmlr.org/papers/volume3/bengio03a/bengio03a.pdf" rel="nofollow">A Neural Probabilistic Language Model by yoshua Bengio, 2003</a>, but I'm not able to understand the connections between the input layer and projection matrix and between projection matrix and hidden layer. I'm not able to get how exactly is the learning for word-vector representation taking place.</p> | 2016-07-18 12:38:30.250000+00:00 | 2016-07-19 07:23:33.487000+00:00 | 2016-07-19 07:23:33.487000+00:00 | nlp|deep-learning|language-model | ['https://stackoverflow.com/questions/37889914/neural-net-projection-layers-for-word-representation', 'http://mi.eng.cam.ac.uk/~gwb24/publications/mphil.thesis.pdf', 'http://arxiv.org/pdf/1301.3781v3.pdf', 'http://www.coling-2014.org/COLING%202014%20Tutorial-fix%20-%20Tomas%20Mikolov.pdf'] | 4 |
53,428,297 | <p><strong>Decision-making (DM)</strong> is a topic widely studied by different fields of science. (Cognitive sciences, Neurosciences, Computer sciences, Behavioral Economics, Operations research, etc. *1)</p>
<p>However, DM problems are varied and the computational approach to address that problem will vary accordingly. For instance:</p>
<p>If you have to make decisions repeatedly, where each decision affects the ones that follow, you are dealing with a sequential DM problem. In those cases, <strong>reinforcement learning</strong> *2 or <strong>deep reinforcement learning</strong> *3 can be used to tackle the problem. Examples of these problems can be seen in video games, where the game AI needs to take different actions (its policy) over time to maximise its score (the reward).</p>
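<p>As a toy illustration of the sequential case (not taken from any of the cited papers), tabular Q-learning on a tiny chain world fits in a few lines; the agent gradually learns the policy that maximises the discounted reward:</p>
<pre class="lang-py prettyprint-override"><code>import random

N_STATES, ACTIONS = 5, (-1, +1)          # walk left/right on a chain, reward at the right end

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    s = random.randrange(N_STATES - 1)   # random non-terminal start state
    while s != N_STATES - 1:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
</code></pre>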
<p>If the problem is not sequential but you deal with multiple criteria to find the most attractive alternative, then you are dealing with a multi-criteria decision-making (MCDM) problem, a topic widely researched in operations research. There are some typically-used algorithms that are utilised to assist human decision-making, like <strong>AHP*4, TOPSIS*5, ELECTRE*6, PROMETHEE*7</strong>. An example of MCDM is selecting a house to buy, where you have to consider location and price among other desirable or undesirable characteristics.</p>
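<p>To give a flavour of that family, below is a minimal TOPSIS sketch in Python/NumPy for the house example; the numbers and weights are made up purely for illustration, with price treated as a cost criterion and the other two columns as benefit criteria:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# rows = candidate houses, columns = (price, location score, size in m2)
X = np.array([[250000.0, 7.0, 80.0],
              [320000.0, 9.0, 95.0],
              [200000.0, 5.0, 70.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([False, True, True])      # price: lower is better

# 1. vector-normalise each criterion and apply the weights
V = weights * X / np.sqrt((X ** 2).sum(axis=0))

# 2. ideal best / worst value per criterion
best = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. relative closeness to the ideal solution (higher = more attractive)
d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
closeness = d_worst / (d_best + d_worst)

print(closeness, "-> pick house", closeness.argmax())
</code></pre>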
<p>Depending on the level of uncertainty, subjective data and incomplete information in the problem, you might need to use fuzzy, intuitionistic or neutrosophic variations of the mentioned algorithms. *8</p>
<p>You might need to optimise DM across different competing goals. In that case, you are dealing with a multi-objective decision-making optimisation problem (MODM). See <strong>Decision trees*9, genetic algorithms*10</strong>.</p>
<p>Furthermore, a DM problem can have different 'agents' making decisions that can affect ours. This is known as 'multi-agent' decision-making. In computer science, <strong>multi-agent system simulations</strong> are commonly used to research these problems. *11</p>
<p>You can also have the case where the agents have to make a collaborative decision that affects all of them. This is known as 'group' decision-making.</p>
<p>In the industry, computational DM can be seen with the widely used <strong>recommender systems</strong> such as the ones in Netflix or Amazon.*13 In the B2B sector, AI in DM can be seen in decision-support systems and <strong>prescriptive analytics</strong> services *14.</p>
<p>I hope you find this information useful. There is indeed much more to this complex topic; I have just tried to summarise it.</p>
<p>Some resources you might want to check:</p>
<ul>
<li><strong>Deep RTS:</strong> A playground for reinforcement learning agents in real-time strategy game environments. (Repository: <a href="https://github.com/cair/deep-rts" rel="nofollow noreferrer">https://github.com/cair/deep-rts</a>) (Pre-print Paper: <a href="https://arxiv.org/abs/1808.05032" rel="nofollow noreferrer">https://arxiv.org/abs/1808.05032</a>)</li>
<li><strong>OpenAI Gym:</strong> A general-purpose playground to test reinforcement learning AI algorithms. (Github: <a href="https://github.com/openai/gym" rel="nofollow noreferrer">https://github.com/openai/gym</a>, page: <a href="https://gym.openai.com/" rel="nofollow noreferrer">https://gym.openai.com/</a>)</li>
<li><strong>DecisionRadar:</strong> An online application to apply TOPSIS decision-making algorithm. (Site: <a href="https://decision-radar.com/" rel="nofollow noreferrer">https://decision-radar.com/</a>)</li>
<li><strong>AgentSimJS:</strong> A 3D multi-agent simulation system built in Javascript. (Repository: <a href="https://github.com/maxdeben83/agentsimjs" rel="nofollow noreferrer">https://github.com/maxdeben83/agentsimjs</a>)</li>
</ul>
<p><strong>REFERENCES:</strong></p>
<ul>
<li>*1 Atkinson, J. W. (1964). An introduction to motivation.</li>
<li>*1 Berridge, K. C. (2004). Motivation concepts in behavioral neuroscience. Physiology & behavior, 81(2), 179-209.</li>
<li>*1 Hwang, C. L., & Yoon, K. (1981). Methods for multiple attribute decision making. In Multiple attribute decision making (pp. 58-191).Springer, Berlin, Heidelberg. </li>
<li>*1 Tversky, A., & Kahneman, D.(1981). The framing of decisions and the psychology of choice science, 211(4481), 453-458. </li>
<li>*2 Littman, M. L. (1994). Markov games as a framework for multi-agent reinforcement learning. In Machine Learning Proceedings 1994 (pp. 157-163). </li>
<li><p>*3 Van Hasselt, H., Guez, A., & Silver, D. (2016, February). Deep Reinforcement Learning with Double Q-Learning. In AAAI (Vol. 2, p. 5).</p></li>
<li><p>*4 Aczél, J., & Saaty, T. L. (1983). Procedures for synthesizing ratio judgements. Journal of Mathematical Psychology, 27(1),
93–102. doi:10.1016/0022-2496(83)90028-7</p></li>
<li><p>*4 Saaty, R. W. (1987). The analytic hierarchy process—what it is and how it is used. Mathematical Modelling, 9(3-5), 167.<br>
doi:10.1016/0270-0255(87)90473-8</p></li>
<li><p>*4 Saaty, T. L. (1986). Axiomatic Foundation of the Analytic Hierarchy Process. Management Science, 32(7), 841.<br>
doi:10.1287/mnsc.32.7.841</p></li>
<li><p>*4 Hwang, C. L., & Yoon, K. (1981). Methods for multiple attribute decision making. In Multiple attribute decision
making (pp. 58-191). Springer, Berlin, Heidelberg.</p></li>
<li><p>*6 Zhou, Y. (1915). Multi-Criteria Decision Making in Software Development: A Systematic Literature Review.</p></li>
<li><p>*7 Zhou, Y. (1915). Multi-Criteria Decision Making in Software Development: A Systematic Literature Review.</p></li>
<li><p>*8 Pramanik, S., Biswas, P., & Giri, B. C. (2015). Hybrid vector similarity measures and their<br>
applications to multi-attribute decision making under neutrosophic
environment. Neural Computing and Applications, 28(5), 1163<br>
doi:10.1007/s00521-015-2125-3</p></li>
<li><p>*8 Mardani, A., Nilashi, M., Zavadskas, E. K., Awang, S. R., Zare, H., & Jamal, N. M. (2018). Decision Making Methods Based on Fuzzy
Aggregation Operators: Three Decades Review from 1986 to 2017.<br>
International Journal of Information Technology & Decision Making,
17(02), 391–466. doi:10.1142/s021962201830001x</p></li>
<li><p>*9 Zhao, H. (2007). A multi-objective genetic programming approach to developing Pareto optimal decision trees. Decision Support
Systems, 43(3), 809-826.</p></li>
<li><p>*9 Laumanns, M., & Ocenasek, J. (2002, September). Bayesian optimization algorithms for multi-objective optimization. In<br>
International Conference on Parallel Problem Solving from Nature (pp.
298-307). Springer, Berlin, Heidelberg.</p></li>
<li><p>*9 Jin, Y. (Ed.). (2006). Multi-objective machine learning (Vol. 16). Springer Science & Business Media.</p></li>
<li><p>10 Tamaki, H., Kita, H., & Kobayashi, S. (1996, May).
Multi-objective optimization by genetic algorithms: A review.
In Evolutionary Computation, 1996., Proceedings of IEEE<br>
International Conference on (pp. 517-522). IEEE.</p></li>
<li><p>*11 Rodriguez, S., Gaud, N., & Galland, S. (2014, August). SARL: a general-purpose agent-oriented programming language. In
Web Intelligence (WI) and Intelligent Agent Technologies (IAT),
2014 IEEE/WIC/ACM International Joint Conferences on (Vol. 3,
pp. 103-110). IEEE.</p></li>
<li><p>*12 Rao, A. S. (1996, January). AgentSpeak (L): BDI agents speak out in a logical computable language. In European Workshop on Modelling Autonomous Agents in a Multi-Agent World (pp. 42-55). Springer, Berlin, Heidelberg.</p></li>
<li><p>*13 Ricci, F., Rokach, L., & Shapira, B. (2015). Recommender systems: introduction and challenges. In Recommender systems
handbook (pp. 1-34). Springer, Boston, MA.</p></li>
<li><p>*14 <a href="https://www.ibm.com/analytics/prescriptive-analytics" rel="nofollow noreferrer">https://www.ibm.com/analytics/prescriptive-analytics</a></p></li>
</ul> | 2018-11-22 09:58:15.103000+00:00 | 2018-11-22 13:16:39.077000+00:00 | 2018-11-22 13:16:39.077000+00:00 | null | 5,320,506 | <p>Do you know <strong>any example</strong> for this topic?.
I have searched google but had no luck with any Decision Making using Artificial Intelligence example ( at least any truly developed)</p> | 2011-03-16 02:48:08.297000+00:00 | 2020-10-22 09:05:47.857000+00:00 | null | artificial-intelligence | ['https://github.com/cair/deep-rts', 'https://arxiv.org/abs/1808.05032', 'https://github.com/openai/gym', 'https://gym.openai.com/', 'https://decision-radar.com/', 'https://github.com/maxdeben83/agentsimjs', 'https://www.ibm.com/analytics/prescriptive-analytics'] | 7 |
67,884,754 | <p><strong>It really depends on your dataset.</strong></p>
<p>During training, Unet will try to learn specific features in the images, such as a baby's shape, body size, color, etc. If your dataset is good enough (e.g. it contains lots of baby examples and lots of adults with a separate color, and the image dimensions are not that high), then you probably won't have any problems at all.</p>
<p>There is a possibility, however, that your model misses some babies or adults in an image. To tackle this issue, there are a couple of things you can do:</p>
<ol>
<li>Add data augmentation techniques during training (e.g. random crop, padding, brightness, contrast, etc.); see the sketch after this list</li>
<li>You can make your model stronger by replacing the Unet model with a newer approach, such as <strong>Unet++ or Unet3+</strong>. According to the Unet3+ paper, it seems that it is able to outperform both Unet & Unet++ in medical image segmentation tasks:
<a href="https://arxiv.org/ftp/arxiv/papers/2004/2004.08790.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/2004/2004.08790.pdf</a></li>
</ol>
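<p>For point 1, a minimal sketch of paired image/mask augmentation with Keras' <code>ImageDataGenerator</code> (only geometric transforms here, and a shared seed so images and segments stay aligned; the arrays are just dummies standing in for your data):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

images = np.random.rand(16, 128, 128, 3).astype("float32")            # dummy images
masks = np.random.randint(0, 2, (16, 128, 128, 1)).astype("float32")  # dummy segments

aug = dict(rotation_range=10, width_shift_range=0.05, height_shift_range=0.05,
           zoom_range=0.1, horizontal_flip=True)
image_gen = ImageDataGenerator(**aug).flow(images, batch_size=4, seed=42)
mask_gen = ImageDataGenerator(**aug).flow(masks, batch_size=4, seed=42)

x_batch = next(image_gen)   # augmented images
y_batch = next(mask_gen)    # identically transformed masks
# during training you would feed these pairs (e.g. zip(image_gen, mask_gen)) to the model
</code></pre>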
<p>Also, I have found this repository, which contains a clean implementation of Unet3+, which might help you get started:
<a href="https://github.com/kochlisGit/Unet3-Plus" rel="nofollow noreferrer">https://github.com/kochlisGit/Unet3-Plus</a></p> | 2021-06-08 09:14:07.403000+00:00 | 2021-06-08 09:14:07.403000+00:00 | null | null | 62,699,213 | <p>I am trying to solve Baby detection with unet segmentation model. I already collected baby images, baby segments and also passed the adult images as negative (created black segments for this).</p>
<p>So if I will do in this way is unet model can differentiate the adults and babies? if not what I have to do next?</p> | 2020-07-02 14:56:18.853000+00:00 | 2021-06-08 09:14:07.403000+00:00 | null | deep-learning|classification|vgg-net|semantic-segmentation | ['https://arxiv.org/ftp/arxiv/papers/2004/2004.08790.pdf', 'https://github.com/kochlisGit/Unet3-Plus'] | 2 |
21,738,608 | <p>The importance of a feature is captured by computing how much the learned model depends on a feature <em>f</em>.</p>
<p>A perceptron is a simple feed-forward neural network, and for a neural network (which is a real-valued nonlinear function), dependency corresponds to the partial derivative of output function with respect to <em>f</em>.</p>
<p>The relative importance of a feature is proportional to its average absolute weight on a trained perceptron. This is not always true for neural networks in general. For instance, this need not hold true for multi-layer perceptrons.</p>
<p>For more details (typing the exact formula here will be a notational mess), look at <a href="http://arxiv.org/pdf/1005.5556.pdf" rel="nofollow">sections 2 and 3 of this paper</a>. I believe equation (8) (in section 3) is what you are looking for.</p>
<p>There, the score is a summation over multiple learners. If yours is a single-layer perceptron, the function learned is a single weight vector</p>
<p><strong>w</strong> = (<em>w1</em>, <em>w2</em>, ... <em>wn</em>)</p>
<p>Then, the <em>average absolute weight</em> I mention at the beginning is simply the absolute weight |<em>wi</em>| of the <em>i</em>-th feature. This seems too simple a measure to be ranking the importance of features, right? But ... if you think about it, an n-dimensional input <strong>x</strong> gets transformed to <strong>w</strong> . <strong>x</strong> (the vector dot product). That is, the <em>i</em>-th weight <em>wi</em> fully controls how much the input changes along one dimension of the vector space.</p>
<p>By the way, in most (if not all) classifiers, the feature weight is itself the measure of its importance. It's just that the weights are computed in more complicated ways for most other classifiers.</p> | 2014-02-12 20:12:38.530000+00:00 | 2014-02-12 21:54:42.740000+00:00 | 2014-02-12 21:54:42.740000+00:00 | null | 21,738,277 | <p>For one of my assignments in my AI class we were tasked with creating a perceptron learning implementation of the Widrow Hoff delta rule. I've coded this implementation in java:</p>
<p>The following github link contains the project:
<a href="https://github.com/dmcquillan314/CS440-Homework/tree/master/CS440-HW2-1" rel="nofollow">https://github.com/dmcquillan314/CS440-Homework/tree/master/CS440-HW2-1</a></p>
<p>The issue that I'm having is not with the creation of the perceptron. That is working fine.</p>
<p>In the project after training the perceptron I then applied an unclassified dataset to the perceptron to then learn the classifications of each input vector. This also worked fine.</p>
<p>My issue pertains to learning which feature of the inputs is the most important.</p>
<p>For example, suppose the feature set within each input vector was color, car model, and car make, and we wanted to determine which feature was the most important. How would one go about doing so?</p>
<p>My original understanding of this led me to believe that I should calculate the correlation coefficient between the value of that feature for each input and the classification vector that is produced. However, this turned out to be a false assumption.</p>
<p>Is there some other way that the most important feature can be learned?</p>
<p><strong>EDIT</strong></p>
<p>Sample weight vector:</p>
<p>( -752, 4771, 17714, 762, 6, 676, 3060, -2004, 5459, 9591.299, 3832, 14963, 20912 )</p>
<p>Sample input vectors:</p>
<p>(55, 1, 2, 130, 262, 0, 0, 155, 0, 0, 1, 0, 3, 0)</p>
<p>(59, 1, 3, 126, 218, 1, 0, 134, 0, 2.2, 2, 1, 6, 1)</p>
<p>(45, 1, 2, 128, 308, 0, 2, 170, 0, 0, 1, 0, 3, 0)</p>
<p>(59, 1, 4, 110, 239, 0, 2, 142, 1, 1.2, 2, 1, 7, 1)</p>
<p>The last element is the classification.</p>
<p>I will post an answer here when I find one. So far I believe that the answer given by the instructor is inaccurate.</p> | 2014-02-12 19:55:49.790000+00:00 | 2019-12-22 21:44:16.597000+00:00 | 2014-02-13 00:24:38.537000+00:00 | java|machine-learning|artificial-intelligence|perceptron | ['http://arxiv.org/pdf/1005.5556.pdf'] | 1 |
10,278,873 | <p>Just been looking at a similar issue (google: "testing scientific software") and came up with a few papers that may be of interest. These cover both the mundane coding errors and the bigger issues of knowing if the result is even right (depth of the Earth's mantle?) </p>
<p><a href="http://http.icsi.berkeley.edu/ftp/pub/speech/papers/wikipapers/cox_harris_testing_numerical_software.pdf" rel="nofollow noreferrer">http://http.icsi.berkeley.edu/ftp/pub/speech/papers/wikipapers/cox_harris_testing_numerical_software.pdf</a></p>
<p><a href="http://www.cs.ua.edu/~SECSE09/Presentations/09_Hook.pdf" rel="nofollow noreferrer">http://www.cs.ua.edu/~SECSE09/Presentations/09_Hook.pdf</a> (broken link; new link is <a href="http://www.se4science.org/workshops/secse09/Presentations/09_Hook.pdf" rel="nofollow noreferrer">http://www.se4science.org/workshops/secse09/Presentations/09_Hook.pdf</a>)</p>
<p><a href="http://www.associationforsoftwaretesting.org/?dl_name=DianeKellyRebeccaSanders_TheChallengeOfTestingScientificSoftware_paper.pdf" rel="nofollow noreferrer">http://www.associationforsoftwaretesting.org/?dl_name=DianeKellyRebeccaSanders_TheChallengeOfTestingScientificSoftware_paper.pdf</a></p>
<p>I thought the idea of mutation testing described in 09_Hook.pdf (see also matmute.sourceforge.net) is particularly interesting as it mimics the simple mistakes we all make. The hardest part is to learn to use statistical analysis for confidence levels, rather than single pass code reviews (man or machine).</p>
<p>The problem is not new. I'm sure I have an original copy of "How accurate is scientific software?" by Hatton et al Oct 1994, that even then showed how different implementations of the same theories (as algorithms) diverged rather rapidly (It's also ref 8 in Kelly & Sanders paper)</p>
<p>--- (Oct 2019)
More recently <a href="https://arxiv.org/pdf/1804.01954.pdf" rel="nofollow noreferrer">Testing Scientific Software: A Systematic Literature Review</a> </p> | 2012-04-23 10:32:12.340000+00:00 | 2019-10-04 09:41:17.457000+00:00 | 2019-10-04 09:41:17.457000+00:00 | null | 3,421,469 | <p>I'm convinced that software testing indeed is very important, especially in science. However, over the last 6 years, I never have come across any scientific software project which was under regular tests (and most of them were not even version controlled). </p>
<p>Now I'm wondering how you deal with software tests for scientific codes (numerical computations). </p>
<p>From my point of view, standard unit tests often miss the point, since there is no exact result, so using <code>assert(a == b)</code> might prove a bit difficult due to "normal" numerical errors. </p>
<p>So I'm looking forward to reading your thoughts about this. </p> | 2010-08-06 06:30:06.843000+00:00 | 2019-10-04 09:41:17.457000+00:00 | 2017-10-05 21:52:09.770000+00:00 | unit-testing|testing|scientific-computing | ['http://http.icsi.berkeley.edu/ftp/pub/speech/papers/wikipapers/cox_harris_testing_numerical_software.pdf', 'http://www.cs.ua.edu/~SECSE09/Presentations/09_Hook.pdf', 'http://www.se4science.org/workshops/secse09/Presentations/09_Hook.pdf', 'http://www.associationforsoftwaretesting.org/?dl_name=DianeKellyRebeccaSanders_TheChallengeOfTestingScientificSoftware_paper.pdf', 'https://arxiv.org/pdf/1804.01954.pdf'] | 5 |
56,107,148 | <p>I think you have misinterpreted MAP-Elites. You are tracking an append-only population separately from the elites, but according to <a href="https://arxiv.org/pdf/1504.04909.pdf" rel="nofollow noreferrer">this reference</a> you are supposed to track only the elites. So I think the line to select the current specimen should be:</p>
<pre><code>current_specimen, _ = random.choice(list(best_in_class.values()))
</code></pre>
<p>So the maximum population size would be 10 for MNIST. I'm not 100% sure that this is your main problem, but it should at least make the algorithm more greedy, moving search away from the oldest solutions.</p>
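<p>A stripped-down toy of that loop is sketched below; the "model" is just a random linear stub standing in for your network, purely to show the elite-only bookkeeping (the archive itself is the population):</p>
<pre class="lang-py prettyprint-override"><code>import random
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))                 # stub "classifier" producing 10 class scores

def certainties(x):                            # softmax over the stub's scores
    z = W @ x
    e = np.exp(z - z.max())
    return e / e.sum()

best_in_class = {}                             # class -> (specimen, certainty); this IS the population
for it in range(2000):
    if best_in_class:
        parent, _ = random.choice(list(best_in_class.values()))
    else:
        parent = rng.uniform(0.0, 1.0, 784)    # seed the archive with a random specimen
    mask = rng.random(784) < 0.1               # mutate roughly 10% of the pixels
    child = np.clip(parent + mask * rng.normal(0.0, 0.1, 784), 0.0, 1.0)
    c = certainties(child)
    for cls in range(10):
        if cls not in best_in_class or c[cls] > best_in_class[cls][1]:
            best_in_class[cls] = (child, c[cls])

print({cls: round(cert, 3) for cls, (_, cert) in best_in_class.items()})
</code></pre>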
<h3>Modification of MAP-Elites</h3>
<p>Thinking about this some more, I'm not sure why this method should work on MNIST with direct pixel parameters. It would be very unlikely to generate a different digit just by random mutation, especially not after a few steps of optimization towards that same class, with all the other classes remaining empty.</p>
<p>It seems your implementation is actually doing this part correctly according to the paper:</p>
<blockquote>
<p>[...] and replaces the current champion for any objective if the new
individual has higher fitness on that objective.</p>
</blockquote>
<p>But in the original MAP-Elites, the goal was to track a single global fitness metric over the whole population. It was possible that some cells remained empty because no solution was ever mapped to this cell. The paper actually tracks a different fitness metric for each cell. This should have been stated more prominently as a modification of MAP-Elites. With this modification it seems plausible that this should work, but it no longer makes sense to allow empty cells.</p> | 2019-05-13 06:57:48.253000+00:00 | 2019-05-13 08:11:09.633000+00:00 | 2019-05-13 08:11:09.633000+00:00 | null | 56,105,489 | <p>I am trying to implement evolutionary algorithm using MAP-Elites evolutionary strategy and polynomial mutation operator (as defined <a href="https://pdfs.semanticscholar.org/3461/99d0c8fce4bd91be855155bb1fe71844e9ac.pdf" rel="nofollow noreferrer">here</a> at point 2.) as discussed in <a href="https://arxiv.org/pdf/1412.1897.pdf" rel="nofollow noreferrer">this</a> paper. I created a three different MNIST models and trained them using tensorflow==2.0.0a using Keras API. The models are performing fine (all around 95% accuracy). </p>
<p>My understanding of the mentioned evolutionary strategy is that we create a starting population and then, with each iteration, mutate a randomly chosen specimen from that population. If after the mutation the new specimen gets a certainty of belonging to any class higher than the previously chosen best specimen for that class, then we treat it as the new best and add it to the population. The expected result is that after the algorithm is finished we should have, for each of the classes, a specimen that gets classified into that class with high certainty. The initial population of images is created using a uniform distribution.</p>
<p>The issue is that my models always classify random input created with a uniform distribution as the same class with high certainty (i.e. the CNN model always classifies it as 8). So most or all specimens I end up with are classified as the same class (with slightly varying certainties of belonging to other classes), even with a big starting population and number of iterations (i.e. 1000 starting specimens and 20000 iterations).</p>
<p>Input samples are normalized to the range [0.0, 1.0]. All of the reasoning below is constrained, for the sake of simplification, to the Dense model described at the bottom (the CNN and a simplified LeNet5 yield similar results).</p>
<p>Using a normal distribution with mean=0.0 and stddev=0.3, or mean=0.5 and stddev=0.3, for generating the starting population, and a mutation chance of 0.3 (instead of 0.1 as in the paper), yields similar results.</p>
<p>I tried using a (1, λ) evolutionary strategy targeting just one class (starting population 100, 100 generations) and it yields better results than the MAP-Elites implemented below (I can generate specimens for more than one class). </p>
<p>I tried not normalizing the data for the model and training it again using the [0, 255] range, but the results were almost the same. I also tried using a Gaussian mutation operator instead of the polynomial one, but it didn't seem to make much of a difference.</p>
<p>Turning off data augmentation when training doesn't seem to have an effect.</p>
<p>Here is the implementation I wrote.</p>
<pre class="lang-py prettyprint-override"><code>def evolutionary_attack(model, population_size, generations_nmb, min=0.0, max=1.0, mutation_chance=0.1, mutation_power=15):
population = [] #
best_in_class = {} #dictionary of specimen performing best for given class
for x in range(population_size):
population.append(np.random.uniform(min, max, model.get_input_shape())) #initialize specimens with random values
# population.append(np.random.normal(0.0, 0.3, model.get_input_shape())) #initialize specimens with random values
for i in range(generations_nmb):
current_specimen = random.choice(population) #choose specimen at random from the population
mutated_specimen = mutate_specimen(current_specimen, min, max, mutation_chance, mutation_power)
logits = model(tf.expand_dims(tf.convert_to_tensor(mutated_specimen, dtype=tf.float32), 0))
certainties_per_class = tf.squeeze(logits).numpy()
for cur_class in range(len(certainties_per_class)):
if cur_class in best_in_class:
_, best_certainty = best_in_class[cur_class]
if certainties_per_class[cur_class] > best_certainty:
#if the new specimen performs better in given class make it a new best and add it to the population
best_in_class[cur_class] = (mutated_specimen, certainties_per_class[cur_class])
population.append(mutated_specimen)
else:
best_in_class[cur_class] = (mutated_specimen, certainties_per_class[cur_class]) #handles the case when there is no best specimen for the given class
def mutate_specimen(specimen, min, max, mutation_chance, mutation_power=15):
specimen = np.copy(specimen)
with np.nditer(specimen, op_flags=['readwrite']) as it:
for old_val in it:
if np.random.uniform() < mutation_chance:
u = np.random.uniform()
if u <= 0.5:
delta = ((2.0 * u) ** (1.0 / (mutation_power))) - 1.0
new_val = old_val + delta * (old_val - min)
else:
delta = 1.0 - (2 * (1 - u))** (1 / (1 + mutation_power))
new_val = old_val + delta * (max - old_val)
old_val[...] = new_val
return np.clip(specimen, min, max)
</code></pre>
<p>In the paper the authors state that they were able to get specimens for each digit classified with >99.99% confidence after 50 generations. This is vastly different from the results that I get. It seems that I am doing something wrong, but I am unable to pinpoint the issue. I am not sure whether it is just some small bug in the code or whether my reasoning behind the implementation is wrong.</p>
<p>My model is constructed like this</p>
<h1>DenseModel (sigmoid activation functions on all layers except the last one)</h1>
<pre><code>input_1 (InputLayer)   [(None, 28, 28, 1)]   0
flatten (Flatten)      (None, 784)            0
dense (Dense)          (None, 784)            615440
dense_1 (Dense)        (None, 800)            628000
dense_2 (Dense)        (None, 800)            640800
dense_3 (Dense)        (None, 10)             8010
</code></pre>
<p>It has been trained for multiple epochs with data augmentation using the Adam optimizer.</p>
<p>EDIT: I just noticed that I am not clipping the values of specimens after mutation. If I do that, then using a normal distribution yields similar results to using a uniform distribution. I fixed this in the posted code. Stupid mistake.</p> | 2019-05-13 03:42:23.257000+00:00 | 2019-05-13 08:11:09.633000+00:00 | 2019-05-13 05:14:07.613000+00:00 | python|tensorflow|machine-learning|keras|neural-network | ['https://arxiv.org/pdf/1504.04909.pdf'] | 1
48,612,144 | <p>In order to guarantee that no party in the channel has tampered with data in its own favor, you need to define a sufficiently strict endorsement policy which includes all required parties and makes sure they are adequately represented within the endorsement policy. This makes it obligatory for the client which issues a new transaction to get endorsements from all interested parties, hence ensuring that all of them have the same consistent state. For example, if you have two organizations <code>Org1</code> and <code>Org2</code> and they do not trust each other, you would like to create the endorsement policy:</p>
<pre><code>AND(Org1.member, Org2.member)
</code></pre>
<p>Therefore the client will have to collect endorsements from peers of both organizations for the transaction to be considered valid; those endorsements have to sign the same bytes, which won't be the case if the data was forged. You can read more about endorsements in the <a href="http://hyperledger-fabric.readthedocs.io/en/release/endorsement-policies.html" rel="nofollow noreferrer">official documentation</a>. There is also a recent <a href="https://arxiv.org/abs/1801.10228" rel="nofollow noreferrer">publication</a> on the Fabric architecture which explains it in more detail.</p> | 2018-02-04 19:27:50.137000+00:00 | 2018-02-04 19:27:50.137000+00:00 | null | null | 48,611,238 | <p>Can someone explain how immutability is implemented in Hyperledger Fabric? If we have a private channel with a small number of peers, how can it be guaranteed that one side hasn't changed the data in its ledger? </p> | 2018-02-04 17:53:21.723000+00:00 | 2018-10-01 02:44:22.720000+00:00 | 2018-10-01 02:44:22.720000+00:00 | hyperledger-fabric|immutability|blockchain | ['http://hyperledger-fabric.readthedocs.io/en/release/endorsement-policies.html', 'https://arxiv.org/abs/1801.10228'] | 2
19,807,787 | <p>I can advise you to use <a href="http://www.robots.ox.ac.uk/%7Evgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf" rel="nofollow noreferrer">Hessian-Affine</a> and <a href="http://www.robots.ox.ac.uk/%7Evgg/research/affine/det_eval_files/matas_bmvc2002.pdf" rel="nofollow noreferrer">MSER</a> for detection, if you need invariance to different factors (e.g., viewpoint change) or FAST, if you need real time.
FAST does a similar job to Harris, but much faster.</p>
<p>You can look into "<a href="http://epubs.surrey.ac.uk/726872/1/Tuytelaars-FGV-2008.pdf" rel="nofollow noreferrer">Local Invariant Feature Detectors: A Survey</a>", and "<a href="http://www.robots.ox.ac.uk/%7Evgg/research/affine/det_eval_files/vibes_ijcv2004.pdf" rel="nofollow noreferrer">A Comparison of Affine Region Detectors</a>" where many detectors are tested and described very well.</p>
<p><strong>Update:</strong> "<a href="http://arxiv.org/abs/1504.06603" rel="nofollow noreferrer">WxBS: Wide Baseline Stereo Generalizations</a>" provides an extended benchmark of novel and classical detectors and descriptors.</p>
<p>Second, the description part is usually slower than detection, so to be real-time you have to use a GPU or a binary descriptor like <a href="http://cvlabwww.epfl.ch/%7Elepetit/papers/calonder_eccv10.pdf" rel="nofollow noreferrer">BRIEF</a> or <a href="http://www.ivpe.com/freak.htm" rel="nofollow noreferrer">FREAK</a>.</p>
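<p>For instance, with OpenCV's Python bindings, a real-time-friendly combination such as FAST keypoints plus the binary ORB descriptor looks roughly like this (a sketch; the threshold and the random image are placeholders to replace with your own data):</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np

img = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in for a grayscale frame

fast = cv2.FastFeatureDetector_create(threshold=25)   # very cheap corner detector
keypoints = fast.detect(img, None)

orb = cv2.ORB_create()                                 # binary (BRIEF-like) descriptor, matched with Hamming distance
keypoints, descriptors = orb.compute(img, keypoints)

print(len(keypoints), None if descriptors is None else descriptors.shape)
</code></pre>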
<p><strong>Update2:</strong> "<a href="https://github.com/featw/hbench" rel="nofollow noreferrer">HPatches (Homography Patches) dataset and benchmark</a>" and corresponding workshop at ECCV 2016. <a href="http://www.iis.ee.ic.ac.uk/ComputerVision/DescrWorkshop/index.html" rel="nofollow noreferrer">http://www.iis.ee.ic.ac.uk/ComputerVision/DescrWorkshop/index.html</a> .</p>
<p><strong>Update3:</strong> "<a href="https://cvg.ethz.ch/research/local-feature-evaluation/" rel="nofollow noreferrer">Comparative Evaluation of Hand-Crafted and Learned Local Features</a>" Descriptor (and, to a lesser extent, detector) evaluation on a large-scale 3D reconstruction task, CVPR 2017.</p>
<p><strong>Update4:</strong> "<a href="https://arxiv.org/abs/1809.11039" rel="nofollow noreferrer">Interest point detectors stability evaluation on ApolloScape dataset</a>" Detector evaluation on an autonomous driving dataset, ECCVW 2018.</p>
<p><strong>Update5:</strong> "<a href="https://arxiv.org/abs/1807.10254" rel="nofollow noreferrer">From handcrafted to deep local invariant features</a>" Huuuge survey-overview paper about handcrafted and learned features, 2018.</p>
<p><strong>Update6:</strong> "<a href="https://arxiv.org/abs/2003.01587" rel="nofollow noreferrer">Image Matching across Wide Baselines: From Paper to Practice</a>" A large-scale benchmark of the above-mentioned and more recent methods for camera pose estimation. IJCV, 2020.</p> | 2013-11-06 09:05:12.080000+00:00 | 2021-01-11 16:12:42.640000+00:00 | 2021-01-11 16:12:42.640000+00:00 | null | 18,437,878 | <p>There are several kinds of detectors and descriptors, like SIFT, SURF, FAST. I wonder whether they are all suitable for real-time applications. Which is the best, or at least better?</p>
<p>And furthermore, is the Harris-Laplacian detector still useful when we already have the above three? Is it better than them?</p> | 2013-08-26 06:25:57.467000+00:00 | 2021-01-11 16:12:42.640000+00:00 | null | image-processing|feature-detection | ['http://www.robots.ox.ac.uk/%7Evgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf', 'http://www.robots.ox.ac.uk/%7Evgg/research/affine/det_eval_files/matas_bmvc2002.pdf', 'http://epubs.surrey.ac.uk/726872/1/Tuytelaars-FGV-2008.pdf', 'http://www.robots.ox.ac.uk/%7Evgg/research/affine/det_eval_files/vibes_ijcv2004.pdf', 'http://arxiv.org/abs/1504.06603', 'http://cvlabwww.epfl.ch/%7Elepetit/papers/calonder_eccv10.pdf', 'http://www.ivpe.com/freak.htm', 'https://github.com/featw/hbench', 'http://www.iis.ee.ic.ac.uk/ComputerVision/DescrWorkshop/index.html', 'https://cvg.ethz.ch/research/local-feature-evaluation/', 'https://arxiv.org/abs/1809.11039', 'https://arxiv.org/abs/1807.10254', 'https://arxiv.org/abs/2003.01587'] | 13
41,799,997 | <p>I like to do the following, yet the only thing I know is that some parameters prefer not to be regularized with L2, such as batch norm parameters and biases. LSTMs contain one Bias tensor (despite conceptually having many biases, they seem to be concatenated or something, for performance), and for the batch normalization I add "noreg" to the variables' names to ignore them too. </p>
<pre><code>loss = your regular output loss
l2 = lambda_l2_reg * sum(
    tf.nn.l2_loss(tf_var)
    for tf_var in tf.trainable_variables()
    if not ("noreg" in tf_var.name or "Bias" in tf_var.name)
)
loss += l2
</code></pre>
<p>Where <code>lambda_l2_reg</code> is the small multiplier, e.g.: <code>float(0.005)</code></p>
<p>Doing this selection (which is the full <code>if</code> in the loop, discarding some variables from the regularization) once made me jump <strong>from 0.879 F1 score to 0.890 in one shot</strong> of testing the code, without readjusting the value of the config's <code>lambda</code>. This included both the changes for the batch normalisation and the biases, and I had other biases in the neural network.</p>
<p>According to <a href="http://jmlr.csail.mit.edu/proceedings/papers/v28/pascanu13.pdf" rel="noreferrer">this paper</a>, regularizing the recurrent weights may help with exploding gradients. </p>
<p>Also, according to <a href="https://arxiv.org/pdf/1409.2329.pdf" rel="noreferrer">this other paper</a>, dropout would be better used between stacked cells and not inside cells if you use some. </p>
<p>About the exploding gradient problem, if you use gradient clipping with the loss that has the L2 regularization already added to it, that regularization will be taken into account too during the clipping process.</p>
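<p>As a rough sketch of that last point (TF1-style API; the tiny variable and learning rate are placeholders, and <code>loss</code> stands for the regularized loss built above):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

x = tf.Variable(1.0)
loss = tf.square(x) + 0.005 * tf.nn.l2_loss(x)  # stand-in for "output loss + l2" from above

optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
grads_and_vars = optimizer.compute_gradients(loss)

grads, variables = zip(*grads_and_vars)
clipped_grads, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)  # clip_norm is a hyperparameter

train_op = optimizer.apply_gradients(zip(clipped_grads, variables))
</code></pre>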
<hr>
<p>P.S. Here is the neural network I was working on: <a href="https://github.com/guillaume-chevalier/HAR-stacked-residual-bidir-LSTMs" rel="noreferrer">https://github.com/guillaume-chevalier/HAR-stacked-residual-bidir-LSTMs</a> </p> | 2017-01-23 05:51:32.770000+00:00 | 2017-03-31 18:34:03.660000+00:00 | 2017-03-31 18:34:03.660000+00:00 | null | 37,571,514 | <p>Tensorflow offers a nice LSTM wrapper.</p>
<pre><code>rnn_cell.BasicLSTM(num_units, forget_bias=1.0, input_size=None,
state_is_tuple=False, activation=tanh)
</code></pre>
<p>I would like to use regularization, say L2 regularization. However, I don't have direct access to the different weight matrices used in the LSTM cell, so I cannot explicitly do something like</p>
<pre><code>loss = something + beta * tf.reduce_sum(tf.nn.l2_loss(weights))
</code></pre>
<p>Is there a way to access the matrices or use regularization somehow with LSTM?</p> | 2016-06-01 14:26:24.537000+00:00 | 2017-03-31 18:34:03.660000+00:00 | null | neural-network|tensorflow|lstm|recurrent-neural-network | ['http://jmlr.csail.mit.edu/proceedings/papers/v28/pascanu13.pdf', 'https://arxiv.org/pdf/1409.2329.pdf', 'https://github.com/guillaume-chevalier/HAR-stacked-residual-bidir-LSTMs'] | 3 |
52,324,460 | <p>The Facebook Research group released a new solution called InferSent.
Results and code are published on GitHub; check their repo. It is pretty awesome. I am planning to use it.
<a href="https://github.com/facebookresearch/InferSent" rel="nofollow noreferrer">https://github.com/facebookresearch/InferSent</a> </p>
<p>Their paper:
<a href="https://arxiv.org/abs/1705.02364" rel="nofollow noreferrer">https://arxiv.org/abs/1705.02364</a>
Abstract:
Many modern NLP systems rely on word embeddings, previously trained in an unsupervised manner on large corpora, as base features. Efforts to obtain embeddings for larger chunks of text, such as sentences, have however not been so successful. Several attempts at learning unsupervised representations of sentences have not reached satisfactory enough performance to be widely adopted. In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods like SkipThought vectors on a wide range of transfer tasks. Much like how computer vision uses ImageNet to obtain features, which can then be transferred to other tasks, our work tends to indicate the suitability of natural language inference for transfer learning to other NLP tasks. Our encoder is publicly available.</p> | 2018-09-14 03:12:30.300000+00:00 | 2018-09-14 03:12:30.300000+00:00 | null | null | 22,129,943 | <p>According to the <a href="http://radimrehurek.com/gensim/models/word2vec.html" rel="noreferrer">Gensim Word2Vec</a>, I can use the word2vec model in gensim package to calculate the similarity between 2 words.</p>
<p>e.g.</p>
<pre><code>trained_model.similarity('woman', 'man')
0.73723527
</code></pre>
<p>However, the word2vec model fails to predict sentence similarity. I found the LSI model with sentence similarity in gensim, but it doesn't seem that it can be combined with the word2vec model. The length of each sentence in my corpus is not very long (shorter than 10 words). So, are there any simple ways to achieve the goal?</p> | 2014-03-02 16:04:53.743000+00:00 | 2020-10-26 10:37:57.237000+00:00 | 2016-04-12 13:10:22.413000+00:00 | python|gensim|word2vec | ['https://github.com/facebookresearch/InferSent', 'https://arxiv.org/abs/1705.02364'] | 2
40,671,243 | <p>I have tried the methods provided by the previous answers. It works, but the main drawback is that the longer the sentences are, the larger the similarity will be (to calculate the similarity I use the cosine score of the mean embeddings of any two sentences), since the more words there are, the more positive semantic effects are added to the sentence. </p>
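<p>For reference, a minimal sketch of that baseline (mean word vectors plus cosine similarity), assuming a reasonably recent gensim where the vectors live under <code>model.wv</code> and simply skipping out-of-vocabulary words:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def sentence_vector(model, sentence):
    # average the vectors of the words that are in the vocabulary
    words = [w for w in sentence.split() if w in model.wv]
    return np.mean([model.wv[w] for w in words], axis=0)

def sentence_similarity(model, s1, s2):
    v1, v2 = sentence_vector(model, s1), sentence_vector(model, s2)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
</code></pre>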
<p>I thought I should change my mind and use the sentence embedding instead as studied in <a href="https://arxiv.org/abs/1502.06922" rel="nofollow noreferrer">this paper</a> and <a href="http://aclweb.org/anthology/W16-1628" rel="nofollow noreferrer">this</a>. </p> | 2016-11-18 07:07:04.613000+00:00 | 2017-01-05 07:05:16.587000+00:00 | 2017-01-05 07:05:16.587000+00:00 | null | 22,129,943 | <p>According to the <a href="http://radimrehurek.com/gensim/models/word2vec.html" rel="noreferrer">Gensim Word2Vec</a>, I can use the word2vec model in gensim package to calculate the similarity between 2 words.</p>
<p>e.g.</p>
<pre><code>trained_model.similarity('woman', 'man')
0.73723527
</code></pre>
<p>However, the word2vec model fails to predict sentence similarity. I found the LSI model with sentence similarity in gensim, but it doesn't seem that it can be combined with the word2vec model. The length of each sentence in my corpus is not very long (shorter than 10 words). So, are there any simple ways to achieve the goal?</p> | 2014-03-02 16:04:53.743000+00:00 | 2020-10-26 10:37:57.237000+00:00 | 2016-04-12 13:10:22.413000+00:00 | python|gensim|word2vec | ['https://arxiv.org/abs/1502.06922', 'http://aclweb.org/anthology/W16-1628'] | 2
44,821,840 | <p>The training parameters used were recorded in a comment in dlib's code here <a href="http://dlib.net/dlib/image_processing/frontal_face_detector.h.html" rel="nofollow noreferrer">http://dlib.net/dlib/image_processing/frontal_face_detector.h.html</a>. For reference:</p>
<pre><code> It is built out of 5 HOG filters. A front looking, left looking, right looking,
front looking but rotated left, and finally a front looking but rotated right one.
Moreover, here is the training log and parameters used to generate the filters:
The front detector:
trained on mirrored set of labeled_faces_in_the_wild/frontal_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
loss per missed target: 1
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 700
nuclear norm regularizer: 9
cell_size: 8
num filters: 78
num images: 4748
Train detector (precision,recall,AP): 0.999793 0.895517 0.895368
singular value threshold: 0.15
The left detector:
trained on labeled_faces_in_the_wild/left_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
loss per missed target: 2
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 250
nuclear norm regularizer: 8
cell_size: 8
num filters: 63
num images: 493
Train detector (precision,recall,AP): 0.991803 0.86019 0.859486
singular value threshold: 0.15
The right detector:
trained left-right flip of labeled_faces_in_the_wild/left_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
loss per missed target: 2
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 250
nuclear norm regularizer: 8
cell_size: 8
num filters: 66
num images: 493
Train detector (precision,recall,AP): 0.991781 0.85782 0.857341
singular value threshold: 0.19
The front-rotate-left detector:
trained on mirrored set of labeled_faces_in_the_wild/frontal_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
rotated left 27 degrees
loss per missed target: 1
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 700
nuclear norm regularizer: 9
cell_size: 8
num images: 4748
singular value threshold: 0.12
The front-rotate-right detector:
trained on mirrored set of labeled_faces_in_the_wild/frontal_faces.xml
upsampled each image by 2:1
used pyramid_down<6>
rotated right 27 degrees
loss per missed target: 1
epsilon: 0.05
padding: 0
detection window size: 80 80
C: 700
nuclear norm regularizer: 9
cell_size: 8
num filters: 89
num images: 4748
Train detector (precision,recall,AP): 1 0.897369 0.897369
singular value threshold: 0.15
</code></pre>
<p>What the parameters are and how to set them is all explained in the dlib documentation. There is also a paper that describes the training algorithm: <a href="https://arxiv.org/abs/1502.00046" rel="nofollow noreferrer">Max-Margin Object Detection</a>.</p>
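<p>For reference, dlib's Python API exposes several of these knobs on a training-options object. A sketch with placeholder paths and values (the nuclear-norm settings from the log above are, as far as I know, only reachable through the C++ interface):</p>
<pre class="lang-py prettyprint-override"><code>import dlib

options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True   # mirror the training set, as in the log above
options.C = 700                             # SVM C parameter
options.epsilon = 0.05
options.detection_window_size = 80 * 80     # window area in pixels
options.num_threads = 4
options.be_verbose = True

# "faces.xml" is a placeholder for an imglab-style annotation file
dlib.train_simple_object_detector("faces.xml", "detector.svm", options)
</code></pre>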
<p>Yes, it can take a lot of RAM to run the trainer. </p> | 2017-06-29 09:52:43.473000+00:00 | 2017-06-29 09:52:43.473000+00:00 | null | null | 44,817,348 | <p>I am trying to reproduce the training process of dlib's frontal_face_detector().
I am using the very same dataset (from
<a href="http://dlib.net/files/data/dlib_face_detector_training_data.tar.gz" rel="nofollow noreferrer">http://dlib.net/files/data/dlib_face_detector_training_data.tar.gz</a>) as dlib say they used, by union of frontal and profile faces + their reflections.</p>
<p>My problems are:</p>
<ol>
<li>Very high memory usage for the whole dataset (30+ GB).</li>
<li>Training on a partial dataset does not yield a very high recall rate: 50-60 percent, compared to frontal_face_detector's 80-90 (testing on a subset of images not used for training).</li>
<li>The detectors work badly on low-resolution images and thus fail to detect faces that are more than 1-1.5 meters away.</li>
<li>Training run time increases significantly with the SVM's C parameter, which I have to increase to achieve a better recall rate (I suspect that this is just an overfitting artifact).</li>
</ol>
<p>My original motivation for training was:</p>
<ul>
<li>gaining the ability to adapt to the specific environment where the camera is installed, e.g. by hard negative mining;</li>
<li>improving detection in depth and run time by reducing the 80x80 window to 64x64 or even 48x48.</li>
</ul>
<p>Am I on the right path? Am I missing anything? Please help...</p> | 2017-06-29 05:59:04.913000+00:00 | 2017-06-29 09:52:43.473000+00:00 | null | face-detection|dlib | ['http://dlib.net/dlib/image_processing/frontal_face_detector.h.html', 'https://arxiv.org/abs/1502.00046'] | 2
64,286,060 | <blockquote>
<p>energy is lost (heat is generated) when bits are zeroed</p>
</blockquote>
<p>Any irreversible process (i.e. a process that loses information) is accompanied by energy dissipation. For example, the <code>x^2</code> function is not reversible since it is not a bijection. To implement this function, you should either:</p>
<ul>
<li>erase some information and dissipate a certain amount of energy,</li>
<li>or implement (x, 0) -> (x, x^2) instead (see the sketch after this list).</li>
</ul>
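<p>Just to illustrate that second option in plain Python (my own toy example; a real reversible language such as NiLang enforces this structure by construction):</p>
<pre class="lang-py prettyprint-override"><code>def square_fwd(x, acc=0):
    # (x, 0) -> (x, x^2): the input x is kept, so no information is erased
    return x, acc + x * x

def square_bwd(x, acc):
    # exact inverse: (x, x^2) -> (x, 0)
    return x, acc - x * x

x, y = square_fwd(3)      # (3, 9)
x, y = square_bwd(x, y)   # back to (3, 0)
</code></pre>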
<blockquote>
<p>Is there any programming platform (library, runtime, language, and compiler) that supports reversible computing?</p>
</blockquote>
<p><a href="https://github.com/GiggleLiu/NiLang.jl" rel="nofollow noreferrer">NiLang</a> is an open source, embedded domain specific reversible programming language in Julia. This eDSL can be used for <a href="https://arxiv.org/abs/2003.04617" rel="nofollow noreferrer">programming language level automatic differentiation</a> and its performance is good.</p> | 2020-10-09 19:15:51.980000+00:00 | 2020-10-09 19:15:51.980000+00:00 | null | null | 10,314,090 | <p>From the <a href="http://www.cise.ufl.edu/research/revcomp/faq.html" rel="noreferrer">Reversible Computing</a> FAQ:</p>
<blockquote>
<p>Achieving the maximum possible computational performance for a given
rate of bit dissipation generally requires explicit reversibility not
only at the lowest level, but at all levels of computing--in devices,
circuits, architectures, languages, and algorithms (a strongly
conjectured, but not yet formally proven result-call it Frank's Law).</p>
</blockquote>
<p>As I understand it, energy is lost (heat is generated) when bits are zeroed. Heat production can be reduced if the software and hardware platform have the ability to reverse logical operations. </p>
<p>Is there any programming platform (library, runtime, language, and compiler) that supports reversible computing?</p> | 2012-04-25 10:47:29.360000+00:00 | 2020-10-09 19:15:51.980000+00:00 | null | programming-languages|functional-programming|runtime|compiler-theory | ['https://github.com/GiggleLiu/NiLang.jl', 'https://arxiv.org/abs/2003.04617'] | 2 |
36,927,993 | <p>I'm not aware of any research specifically dealing with quantization of input data, but you may want to check out some related work on quantization of CNN parameters: <a href="http://arxiv.org/pdf/1512.06473v2.pdf" rel="nofollow">http://arxiv.org/pdf/1512.06473v2.pdf</a>. Depending on what your end goal is, the "Q-CNN" approach may be useful for you.</p>
<p>My own experience with using various quantizations of the input data for CNNs has been that there's a heavy dependency between the degree of quantization and the model itself. For example, I've played around with using various interpolation methods to reduce image sizes and reducing the color palette size, and in the end, I discovered that each variant required a different tuning of hyper-parameters to achieve optimal results. Generally, I found that minor quantization of data had a negligible impact, but there was a knee in the curve where throwing away additional information dramatically impacted the achievable accuracy. Unfortunately, I'm not aware of any way to determine what degree of quantization will be optimal without experimentation, and even deciding what's optimal involves a trade-off between efficiency and accuracy which doesn't necessarily have a one-size-fits-all answer.</p>
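<p>(For concreteness, the kind of input quantization discussed above — reducing a grey-level image to fewer levels before feeding it to the network — fits in a couple of numpy lines; the level count and the random demo batch are just placeholders for your own experiment.)</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def quantize(images, levels=16):
    """Map float images in [0, 1] onto `levels` evenly spaced grey values."""
    images = np.clip(images, 0.0, 1.0)
    return np.round(images * (levels - 1)) / (levels - 1)

demo = np.random.rand(4, 28, 28)           # stand-in for a batch of grey images
print(np.unique(quantize(demo, 16)).size)  # at most 16 distinct grey levels
</code></pre>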
<p>On a theoretical note, keep in mind that CNNs need to be able to find useful, spatially-local features, so it's probably reasonable to assume that any encoding that disrupts the basic "structure" of the input would have a significantly detrimental effect on the accuracy achievable.</p> | 2016-04-29 01:14:39.623000+00:00 | 2016-04-29 01:14:39.623000+00:00 | null | null | 36,911,892 | <p>I have a set of data, 2D matrix (like Grey pictures).
I use a CNN as a classifier.
I would like to know if there is any study/experience on the accuracy impact
of changing the encoding from the traditional encoding.</p>
<p>I suppose yes; the question is rather which transformations of the encoding keep the accuracy invariant, and which ones deteriorate it.</p>
<p>To clarify, this concerns mainly the quantization process of the raw data into input data.</p>
<p>EDIT:</p>
<p>Quantizing the raw data into input data is already a pre-processing of the data, adding or removing some features (even minor ones). The impact of this quantization process on real DNN computation, in terms of accuracy, does not seem very clear.
Maybe some research is available.</p> | 2016-04-28 10:26:46.120000+00:00 | 2016-04-29 01:14:39.623000+00:00 | 2016-04-28 16:55:52.583000+00:00 | tensorflow|deep-learning|conv-neural-network|keras | ['http://arxiv.org/pdf/1512.06473v2.pdf'] | 1
46,844,425 | <ol>
<li>you can do <a href="https://arxiv.org/pdf/1710.09282.pdf" rel="nofollow noreferrer">network compression</a>.</li>
<li>cut the sentence into pieces by <a href="http://www.aclweb.org/anthology/P16-1162" rel="nofollow noreferrer">byte-pair-encoding</a>, a <a href="https://arxiv.org/abs/1804.10959" rel="nofollow noreferrer">unigram language model</a>, etc., and then try <a href="https://arxiv.org/pdf/1503.00075.pdf" rel="nofollow noreferrer">TreeLSTM</a>.</li>
<li>you can try a faster softmax like <a href="https://arxiv.org/pdf/1609.04309.pdf" rel="nofollow noreferrer">adaptive softmax</a></li>
<li>try <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/keras/layers/CuDNNLSTM" rel="nofollow noreferrer">cudnnLSTM</a></li>
<li>try <a href="https://papers.nips.cc/paper/6613-dilated-recurrent-neural-networks.pdf" rel="nofollow noreferrer">dilated RNN</a></li>
<li>switch to a CNN (e.g. a dilated CNN) or to BERT for parallelization and more efficient GPU support</li>
</ol> | 2017-10-20 07:39:31.927000+00:00 | 2020-09-29 02:03:05.060000+00:00 | 2020-09-29 02:03:05.060000+00:00 | null | 46,841,443 | <p>We've trained a tf-seq2seq model for question answering. The main framework is from <a href="https://github.com/google/seq2seq" rel="nofollow noreferrer">google/seq2seq</a>. We use bidirectional RNN( GRU encoders/decoders 128units), adding soft attention mechanism.</p>
<p>We limit maximum length to 100 words. It mostly just generates 10~20 words.</p>
<p>For model inference, we try two cases:</p>
<ol>
<li>normal(greedy algorithm). Its inference time is about 40ms~100ms</li>
<li>beam search. We try to use beam width 5, and its inference time is about 400ms~1000ms.</li>
</ol>
<p>So we want to try beam width 3; its time may decrease, but it may also influence the final results.</p>
<p>So are there any suggestions to decrease inference time for our case? Thanks.</p> | 2017-10-20 02:14:46.557000+00:00 | 2022-08-02 14:14:22.630000+00:00 | 2020-06-20 09:12:55.060000+00:00 | python|tensorflow|inference | [] | 0
62,980,224 | <p>Three years later, we have what seems to be the start of solutions for this type of problem: sparse transformers.</p>
<p>See</p>
<p><a href="https://arxiv.org/abs/1904.10509" rel="nofollow noreferrer">https://arxiv.org/abs/1904.10509</a></p>
<p><a href="https://openai.com/blog/sparse-transformer/" rel="nofollow noreferrer">https://openai.com/blog/sparse-transformer/</a></p> | 2020-07-19 12:21:31.173000+00:00 | 2020-08-09 21:18:35.427000+00:00 | 2020-08-09 21:18:35.427000+00:00 | null | 44,478,272 | <p>I have some data that is sampled at at a very high rate (on the order of hundreds of times per second). This results in a sequence length that is huge (~90,000 samples) on average for any given instance. This entire sequence has a single label. I am trying to use an LSTM neural network to classify new sequences as one of these labels (multiclass classification). </p>
<p>However, using an LSTM with a such a large sequence length results in a network that is quite large.</p>
<p>What are some methods to effectively 'chunk' these sequences so that I could reduce the sequence length of the neural network, yet still maintain the information captured in the entire instance? </p> | 2017-06-10 21:33:18.173000+00:00 | 2020-08-09 21:18:35.427000+00:00 | null | neural-network|lstm | ['https://arxiv.org/abs/1904.10509', 'https://openai.com/blog/sparse-transformer/'] | 2 |
9,511,621 | <p>check out Aydın Buluc, John R. Gilbert: <a href="https://arxiv.org/pdf/1006.2183" rel="nofollow noreferrer">Highly Parallel Sparse Matrix-Matrix Multiplication</a></p> | 2012-03-01 06:28:18.657000+00:00 | 2022-03-08 05:54:10.857000+00:00 | 2022-03-08 05:54:10.857000+00:00 | null | 8,517,152 | <p>I am optimizing code which heavily relies on a custom made Matrix library, (which won't be excluded from the project because it is everywhere. This is not nice, but it's a fact...) Many calculations are done with matrices of 10-20 rows and columns, many computations include a quadratic form like</p>
<pre><code>C = A*B*A'
</code></pre>
<p>I realized that often A is sparse and I would like to make use of this fact. So I am looking for an algorithm that would handle this case. Numerical stability is important. Is there anything I can use? (I didn't write our library so I don't know if there are any pitfalls I should take into account?)</p>
<p>As "our" simple O(n³) multiplication method executes faster than Eigen 3 on the target platform, as I need numerical stability and the matrices aren't very big, I guess that Strassen's algorithm as well as Coppersmith–Winograd algorithm aren't what I am looking for. Instead it's just the quadratic form multiplication in a way that lets me easily check for zeros in A.</p> | 2011-12-15 08:31:10.387000+00:00 | 2022-03-08 05:54:10.857000+00:00 | 2022-03-07 22:08:01.327000+00:00 | c++|algorithm|math|sparse-matrix|matrix-multiplication | ['https://arxiv.org/pdf/1006.2183'] | 1 |
72,736,376 | <p>The reason is that <code>random_configuration_model</code> uses a rejection sampling approach to generate graphs.</p>
<p>You can already see it on a star on 25 nodes:</p>
<pre><code>julia> @time random_configuration_model(25, [24; fill(1, 24)])
14.668509 seconds (134.34 M allocations: 16.465 GiB, 10.71% gc time)
{25, 24} undirected simple Int64 graph
julia> @time random_configuration_model(25, [24; fill(1, 24)])
2.242426 seconds (20.41 M allocations: 2.501 GiB, 10.54% gc time)
{25, 24} undirected simple Int64 graph
julia> @time random_configuration_model(25, [24; fill(1, 24)])
14.490126 seconds (130.53 M allocations: 15.999 GiB, 10.77% gc time)
{25, 24} undirected simple Int64 graph
</code></pre>
<p>Rejection sampling is problematic when the degree sequence is almost non-graphic (as then the probability of "hitting" a simple graph is low).</p>
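<p>(To illustrate why, here is a rough Python/networkx sketch of the rejection idea — pair up the stubs, then resample whenever the result is not a simple graph with the requested degrees; for a near-non-graphic sequence almost every attempt gets rejected. This is only an illustration, not what Graphs.jl does internally.)</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx

def configuration_model_rejection(degrees, max_tries=10_000):
    for _ in range(max_tries):
        multigraph = nx.configuration_model(degrees)
        simple = nx.Graph(multigraph)                        # collapse parallel edges
        simple.remove_edges_from(nx.selfloop_edges(simple))  # drop self-loops
        if sorted(d for _, d in simple.degree()) == sorted(degrees):
            return simple  # nothing was lost, so the pairing was already simple
    raise RuntimeError("rejection sampling did not find a simple graph")
</code></pre>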
<p>Faster approaches than rejection sampling are available if you can accept a small deviation from uniform sampling but AFAICT are not implemented in Graphs.jl. One such method that is popular is <a href="https://arxiv.org/abs/cs/0502085" rel="nofollow noreferrer">https://arxiv.org/abs/cs/0502085</a> which additionally puts a restriction that the graph should be connected. This method is available in iGraph.</p> | 2022-06-23 20:46:54.940000+00:00 | 2022-06-23 20:46:54.940000+00:00 | null | null | 72,732,571 | <p>I'm facing problems with the generation of configurational graphs on LightGraphs. Hereafter, the vector <code>E</code> contains the sequence of edges. I have to generate this kind of graph iteratively inside a loop and the example below reproduces the problem.</p>
<pre><code>using LightGraphs, Distributions
N=2000;c=0.01*N
α=0.625
p = α/(c+α)
E = zeros(Int64,N)
for j in 1:100
    s=0
    for i in 1:N
        E[i] = rand(NegativeBinomial(α,p))
        s += E[i]
    end
    if iseven(s) == false
        k = rand(DiscreteUniform(1,N))
        E[k] += 1
    end
    @show s
    g = random_configuration_model(N,E)
    @show j
end
</code></pre>
<p>At some iteration step <code>j</code>, it seems that <code>g = random_configuration_model(N,E)</code> takes unexpected (very) long time to run, since the variables that determine the complexity (<code>N</code> and <code>c</code>) remain of the same order. Making sure that the sequence is graphical with check_graphical=true doesn't help and the problem also happens. It happens only for small values of <code>α</code> (<1), but this parameter only affects the variance of the negative binomial distribution, and not its mean value, that is approximately <code>c</code> for finite <code>N</code>. Does anyone know something that may be causing this problem?</p>
<p>Edit: as a matter of completeness, I leave below how I generated the configuration random graph with iGraph (full doc: <a href="https://igraph.org/" rel="nofollow noreferrer">https://igraph.org/</a>). One can find how to transform the iGraph object <code>g2</code> to a LightGraph object (and more on general usage) at <a href="https://bkamins.github.io/julialang/2021/04/09/igraph.html" rel="nofollow noreferrer">this</a> tutorial by Bogumił Kamiński.</p>
<pre><code>using LightGraphs, PyCall, Distributions
ig = pyimport("igraph")
s=0;N=1000;c=N*0.01;α=0.625;p=α/(c+α)
E=zeros(Int64,N)
for i in 1:N
    E[i] = rand(NegativeBinomial(α,p))
    s += E[i]
end
if iseven(s) == false
    k = rand(DiscreteUniform(1,N))
    E[k] += 1
end
g2 = ig.Graph.Realize_Degree_Sequence(E)
</code></pre> | 2022-06-23 15:13:52.527000+00:00 | 2022-06-29 17:39:56.360000+00:00 | 2022-06-29 17:39:56.360000+00:00 | time|julia|lightgraphs | ['https://arxiv.org/abs/cs/0502085'] | 1 |
38,140,870 | <p>The keyword is <a href="https://en.wikipedia.org/wiki/Automatic_summarization" rel="nofollow"><strong>Automatic Summarization</strong></a>.</p>
<p>Generally, there are two approaches to automatic summarization: <strong><em>extraction</em></strong> and <strong><em>abstraction</em></strong>.</p>
<ul>
<li>Extractive methods work by selecting a subset of existing words, phrases, or sentences in the original text to form the summary.</li>
<li>Abstractive methods build an internal semantic representation and then use natural language generation techniques to create a summary that is closer to what a human might generate.</li>
</ul>
<p>Abstractive summarization is a lot more difficult. An interesting approach is described in <a href="http://arxiv.org/abs/1509.00685" rel="nofollow">A Neural Attention Model for Abstractive Sentence Summarization</a> by Alexander M. Rush, Sumit Chopra, Jason Weston (source code based on the paper <a href="https://github.com/facebook/NAMAS" rel="nofollow">here</a>).</p>
<p>A "simple" approach is used in Word (<a href="https://support.office.com/en-us/article/Automatically-summarize-a-document-b43f20ae-ec4b-41cc-b40a-753eed6d7424" rel="nofollow">AutoSummary Tool</a>): </p>
<blockquote>
<p>AutoSummarize determines key points by analyzing the document and assigning a score to each sentence. Sentences that contain words used frequently in the document are given a higher score. You then choose a percentage of the highest-scoring sentences to display in the summary.</p>
<p>You can select whether to highlight key points in a document, insert an executive summary or abstract at the top of a document, create a new document and put the summary there, or hide everything but the summary.</p>
<p>If you choose to highlight key points or hide everything but the summary, you can switch between displaying only the key points in a document (the rest of the document is hidden) and highlighting them in the document. As you read, you can also change the level of detail at any time.</p>
</blockquote>
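<p>A toy version of that frequency-based extractive scoring (my own illustration, not Word's actual algorithm) fits in a few lines of Python:</p>
<pre class="lang-py prettyprint-override"><code>import re
from collections import Counter

def summarize(text, ratio=0.2):
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"\w+", text.lower()))
    # score each sentence by the frequency of the words it contains
    scores = [(sum(freq[w] for w in re.findall(r"\w+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scores, reverse=True)[:max(1, int(len(sentences) * ratio))]
    # return the highest-scoring sentences in their original order
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))
</code></pre>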
<p>Anyway, automatic data (text) summarization is an active area of machine learning / data mining with much ongoing research. You should start by reading some good overviews:</p>
<ul>
<li><a href="http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings2/sum-mani.pdf" rel="nofollow">Summarization evaluation: an overview</a> by Inderjeet Mani.</li>
<li><a href="https://www.cs.cmu.edu/~afm/Home_files/Das_Martins_survey_summarization.pdf" rel="nofollow">A Survey on Automatic Text Summarization</a> by Dipanjan Das André F.T. Martins (emphasizes extractive approaches to summarization using statistical methods).</li>
</ul> | 2016-07-01 08:43:18.217000+00:00 | 2016-07-01 08:43:18.217000+00:00 | null | null | 38,139,321 | <p>I want to write a learning algorithm which can automatically create summaries of articles.</p>
<p>E.g., there are some fiction novels (one category, considering it as a filter) in PDF format. I want to make an automated process for creating their summaries.
We can provide some sample data to implement it in a supervised learning approach.
Kindly suggest how I can implement this properly.</p>
<p>I am a beginner and am pursuing Andrew Ng's course, am aware of some common algorithms (linear regression, logistic regression, neural nets) plus Udacity statistics courses, and am ready to dive more into NLP, deep learning, etc., but the motive is to solve this. :)
Thanks in advance </p> | 2016-07-01 07:17:35.583000+00:00 | 2016-07-01 08:43:18.217000+00:00 | null | machine-learning|nlp|artificial-intelligence|deep-learning|supervised-learning | ['https://en.wikipedia.org/wiki/Automatic_summarization', 'http://arxiv.org/abs/1509.00685', 'https://github.com/facebook/NAMAS', 'https://support.office.com/en-us/article/Automatically-summarize-a-document-b43f20ae-ec4b-41cc-b40a-753eed6d7424', 'http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings2/sum-mani.pdf', 'https://www.cs.cmu.edu/~afm/Home_files/Das_Martins_survey_summarization.pdf'] | 6 |
16,744,418 | <p>Yes, GCC and Visual Studio C++ have different <code>long double</code> types. On GCC generating code for x86, <code>long double</code> is a 80-bit double-extended IEEE 754 format(*), whereas Visual Studio C++ treats <code>long double</code> like a 64-bit double-precision IEEE 754 format(**).</p>
<p>So <code>(long double)t</code> does not have to be the same number on both platforms, and the division is not the same either. Although you have tagged your question “integer-division”, it is a floating-point division between different floating-point types.</p>
<p>(*) almost: it behaves very very much like a 79-bit <a href="http://en.wikipedia.org/wiki/IEEE_754-1985" rel="nofollow noreferrer">IEEE 754</a> type with 15 exponent bits and 63 significand bits would, but it has a slightly wider exponent range since it uses an explicit bit for the leading 1 in the significand.</p>
<p>(**) almost: because the compiler generates instructions that use the historical x87 instructions after having configured the x87 for 53-bit significands, denormal results may be double-rounded (<a href="http://arxiv.org/abs/cs/0701192" rel="nofollow noreferrer">reference</a>).</p> | 2013-05-24 22:20:23.723000+00:00 | 2022-06-20 12:46:00.453000+00:00 | 2022-06-20 12:46:00.453000+00:00 | null | 16,744,245 | <p>I have coded following algorithm to convert a decimal value into Binary/Hexadecimal etc..</p>
<pre><code>string toFormatFromDecimal(long long t, Format format) {
    int digitCount = ceil(log(t) / log((int) format));
    string hex = "";
    for (int i = 0; i < digitCount; i++) {
        long double cur = (long double)t / (long double)(format);
        long long ganzzahl = (long long) cur;
        long double kommazahl = cur - ganzzahl;
        hex += digits[(long long) (kommazahl * format)];
        t = ganzzahl;
    }
    return string(hex.rbegin(), hex.rend());
}
</code></pre>
<p>I use GCC on Linux and the Visual Studio C++ compiler on Windows.
It seems that I get different values at the "integer" division here:</p>
<pre><code>long long ganzzahl = (long long) cur;
</code></pre>
<p>Any idea how this could happen? Are there different precisions on Linux and Windows?</p>
<p>Thanks
Florian</p>
<p>--Solution--</p>
<pre><code>string toFormatFromDecimal(long long t, Format format) {
    int digitCount = ceil(log(t) / log((int) format));
    string hex = "";
    for (int i = 0; i < digitCount; i++) {
        hex += digits[(int) (t%format)];
        t = t/format;
    }
    return string(hex.rbegin(), hex.rend());
}
</code></pre> | 2013-05-24 22:03:24.370000+00:00 | 2022-06-20 12:46:00.453000+00:00 | 2013-05-25 11:13:25.143000+00:00 | c++|linux|windows|gcc|floating-point | ['http://en.wikipedia.org/wiki/IEEE_754-1985', 'http://arxiv.org/abs/cs/0701192'] | 2 |
61,772,752 | <p>U-Net is used for semantic segmentation whereas VGG-16 is used for classification.</p>
<p><a href="https://i.stack.imgur.com/8lc3x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8lc3x.png" alt="enter image description here"></a></p>
<p>Usually, with U-Net we predict a single-channel mask. The input can have 3 channels, but for generating binary masks we almost always use a single-channel output.</p>
<p>VGG, on the other hand, just gives softmax probabilities for the input image, which we use to decide which class the image belongs to.</p>
<p><a href="https://i.stack.imgur.com/UVrBO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UVrBO.png" alt="enter image description here"></a></p>
<p>Finally, in keras/tensorflow, <code>kernel_size = 3</code> and <code>kernel_size = (3,3)</code> are equivalent. For Conv2D you always need 2-D kernels; when we pass an integer like <code>3</code>, keras uses a square kernel, meaning the width and height of the kernel are both 3.</p>
<p><a href="https://i.stack.imgur.com/LFh5f.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LFh5f.png" alt="enter image description here"></a></p>
<p>So, <code>kernel_size</code> has no relation to channels: kernels relate to the spatial dimensions (width and height of images/feature maps), while the number of filters relates to channels.</p>
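<p>You can verify this equivalence directly in Keras (a quick check, nothing model-specific):</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.layers import Conv2D

a = Conv2D(filters=64, kernel_size=3)
b = Conv2D(filters=64, kernel_size=(3, 3))

# both layers end up with the same 2-D kernel shape
print(a.kernel_size, b.kernel_size)  # (3, 3) (3, 3)
</code></pre>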
<p>Relevant papers and code:</p>
<p><a href="https://arxiv.org/pdf/1411.4038.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1411.4038.pdf</a> (vgg16 segmentation)</p>
<p><a href="https://courses.cs.washington.edu/courses/cse576/17sp/notes/Sachin_Talk.pdf" rel="nofollow noreferrer">https://courses.cs.washington.edu/courses/cse576/17sp/notes/Sachin_Talk.pdf</a> (encode-decoder architecture)</p>
<p><a href="https://arxiv.org/pdf/1511.00561.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1511.00561.pdf</a> (vgg16 encode-decoder (+skip connections) for segmentation)</p>
<p><a href="https://github.com/divamgupta/image-segmentation-keras" rel="nofollow noreferrer">https://github.com/divamgupta/image-segmentation-keras</a></p>
<p><a href="https://github.com/upul/Semantic_Segmentation" rel="nofollow noreferrer">https://github.com/upul/Semantic_Segmentation</a></p> | 2020-05-13 10:54:14.727000+00:00 | 2020-05-13 11:17:49.567000+00:00 | 2020-05-13 11:17:49.567000+00:00 | null | 61,772,105 | <p>I've just started to learn CNN with Tensorflow and Keras.</p>
<p>I have found these two implementations, the first is for U-NET and the second one is for VGG-16:</p>
<pre><code>def unet(pretrained_weights = None,input_size = (256,256,1)):
    inputs = Input(input_size)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
</code></pre>
<p>VGG-16:</p>
<pre><code>def vgg16(input_size = (224,224,3)):
    model = Sequential()
    model.add(Conv2D(input_shape=input_size,filters=64,kernel_size=(3,3),padding="same", activation="relu"))
    model.add(Conv2D(filters=64,kernel_size=(3,3),padding="same", activation="relu"))
</code></pre>
<p>I have noticed that in U-Net they are using one-channel images, and in VGG-16 they are using three-channel images. I have also noticed that the U-Net Conv2D layer uses kernel_size equal to 3, while in VGG-16 it is equal to (3, 3).</p>
<p>Is there any relationship there, i.e. with one-channel images they use a 1D kernel size, and with three-channel images they use a 2D kernel size?</p> | 2020-05-13 10:22:26.163000+00:00 | 2020-05-13 11:17:49.567000+00:00 | 2020-05-13 10:48:10.313000+00:00 | tensorflow|keras|computer-vision|conv-neural-network | ['https://i.stack.imgur.com/8lc3x.png', 'https://i.stack.imgur.com/UVrBO.png', 'https://i.stack.imgur.com/LFh5f.png', 'https://arxiv.org/pdf/1411.4038.pdf', 'https://courses.cs.washington.edu/courses/cse576/17sp/notes/Sachin_Talk.pdf', 'https://arxiv.org/pdf/1511.00561.pdf', 'https://github.com/divamgupta/image-segmentation-keras', 'https://github.com/upul/Semantic_Segmentation'] | 8
53,873,585 | <p>The architectures you mention are really loose <em>families</em> of architecture. Performance depends on the details and (of course) the task. Moreover, the two styles are often combined in various ways, so it isn't really an "either-or" choice.</p>
<p>Nevertheless, at the time of writing the Transformer-based <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">BERT</a> and RNN-based <a href="https://arxiv.org/abs/1802.05365" rel="nofollow noreferrer">ELMo</a> architectures are popular. Pre-trained models and code are available for both and they both perform well across a variety of tasks, including classification. Why not try them both?</p> | 2018-12-20 17:37:09.613000+00:00 | 2018-12-20 17:37:09.613000+00:00 | null | null | 53,865,657 | <p>I have tried and searched, and found that RNNs give better results. Which should I use: LSTMs, GRUs, traditional RNNs, or CNNs?</p> | 2018-12-20 09:21:18.343000+00:00 | 2018-12-24 10:36:03.330000+00:00 | 2018-12-20 15:22:40.360000+00:00 | neural-network|deep-learning|nlp|lstm|recurrent-neural-network | ['https://arxiv.org/abs/1810.04805', 'https://arxiv.org/abs/1802.05365'] | 2
69,986,011 | <p>First thing, the "^" is wrong, as that would only match text at the beginning of the line. The * after the g also does not mean "anything", but rather "any number of g".</p>
<p>You could try with <code>re_res = re.findall(r"https://arxiv.org/[^\s\)\]]*[0-9]", str)</code>.</p>
<p>It is not foolproof but should cover most cases.</p>
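<p>Applied to a shortened version of the example paragraph from the question (a quick sanity check):</p>
<pre class="lang-py prettyprint-override"><code>import re

text = "Further information can be referred to this [arXiv article](https://arxiv.org/abs/2109.05857)."
print(re.findall(r"https://arxiv.org/[^\s\)\]]*[0-9]", text))
# ['https://arxiv.org/abs/2109.05857']
</code></pre>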
<p>Find <code>https://arxiv.org/</code>
followed by any number (<code>*</code>) of characters that are not (<code>[^...]</code>) whitespace (<code>\s</code>) or closing round or square brackets (<code>\)\]</code>), together <code>[^\s\)\]]*</code>, ending with a digit (<code>[0-9]</code>).</p> | 2021-11-16 08:37:39.847000+00:00 | 2021-11-16 08:37:39.847000+00:00 | null | null | 69,985,696 | <p>I have a paragraph and from it I want to extract the arXiv DOIs. For example, this is the given paragraph:</p>
<pre><code>"Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged.
Further information can be referred to this [arXiv article]`(https://arxiv.org/abs/2109.05857).`"
</code></pre>
<p>The output should be: <code>https://arxiv.org/abs/2109.05857</code>.
The start of the arXiv DOI will always be "https://arxiv.org" or "arxiv.org" and the appended string could be anything.</p>
<p>I tried <code>exp = re.findall("^https://arxiv.org*", str)</code> but it doesn't work.</p>
<p>Any help will be appreciated.</p> | 2021-11-16 08:10:49.700000+00:00 | 2021-11-16 09:15:51.127000+00:00 | 2021-11-16 08:23:18.337000+00:00 | python-3.x|regex | [] | 0
69,986,446 | <pre><code>import re
string = """Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged.
Further information can be referred to this [arXiv article]`(https://arxiv.org/abs/2109.05857).`"""
pattern=re.compile('\((https://arxiv.org[^\)]+|arxiv.org[^\)]+)', re.MULTILINE)
pattern.findall(string)
#output
#['https://arxiv.org/abs/2109.05857']
</code></pre> | 2021-11-16 09:15:51.127000+00:00 | 2021-11-16 09:15:51.127000+00:00 | null | null | 69,985,696 | <p>I have paragraph and from it I want to extract the arxiv dois. For example, this is the given paragraph:</p>
<pre><code>"Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged.
Further information can be referred to this [arXiv article]`(https://arxiv.org/abs/2109.05857).`"
</code></pre>
<p>The output should be: <code>https://arxiv.org/abs/2109.05857</code>.
The start of the arXiv DOI will always be "https://arxiv.org" or "arxiv.org" and the appended string could be anything.</p>
<p>I tried <code>exp = re.findall("^https://arxiv.org*", str)</code> but it doesn't work.</p>
<p>Any help will be appreciated.</p> | 2021-11-16 08:10:49.700000+00:00 | 2021-11-16 09:15:51.127000+00:00 | 2021-11-16 08:23:18.337000+00:00 | python-3.x|regex | [] | 0
67,166,871 | <p>As far as my research goes, the best architecture for making a TTS engine currently is Tacotron 2 [<a href="https://arxiv.org/pdf/1712.05884.pdf" rel="nofollow noreferrer">Paper here</a>], a neural network architecture for
speech synthesis directly from text (can easily capture via <a href="https://en.wikipedia.org/wiki/Optical_character_recognition" rel="nofollow noreferrer">OCR</a>). It has achieved a <a href="https://en.wikipedia.org/wiki/Mean_opinion_score#:%7E:text=Mean%20opinion%20score%20(MOS)%20is,of%20a%20stimulus%20or%20system.&text=MOS%20is%20a%20commonly%20used,not%20restricted%20to%20those%20modalities." rel="nofollow noreferrer">MOS(mean opinion score)</a> of 4.53 comparable to a MOS of 4.58 for professionally recorded speech. The official implementation of Tacotron 2 is not public but there is a tensorflow implementation made using tensorflow 1.15.0 <a href="https://github.com/Rayhane-mamah/Tacotron-2" rel="nofollow noreferrer">here</a>. There is also a pytorch implementation by nvidia <a href="https://github.com/NVIDIA/tacotron2" rel="nofollow noreferrer">here</a> which is more currently maintained. Both implementations can be retrained using dataset for a new language(language with no TTS implementation yet) for easy implementation of a TTS engine. You can also use the architectures above as a stepping stone to build your own architecture.</p> | 2021-04-19 17:46:47.180000+00:00 | 2021-04-19 17:53:33.780000+00:00 | 2021-04-19 17:53:33.780000+00:00 | null | 7,223,680 | <p>As I know, TTS needs TTS engine to speak one language. In Android emulator 2.2, Pico TTS engine is default. It has only some popular languages. I can see some engines on Market which must be purchased to install. My question: <strong>is there any way to create a custom engine which support other languages?(by programming or using software)</strong> </p>
<p>(I don't know if I should post this question in StackOverflow or SuperUser. If wrong place, please migrate it)</p> | 2011-08-28 20:20:26.943000+00:00 | 2022-07-16 08:37:51.113000+00:00 | null | android|text-to-speech | ['https://arxiv.org/pdf/1712.05884.pdf', 'https://en.wikipedia.org/wiki/Optical_character_recognition', 'https://en.wikipedia.org/wiki/Mean_opinion_score#:%7E:text=Mean%20opinion%20score%20(MOS)%20is,of%20a%20stimulus%20or%20system.&text=MOS%20is%20a%20commonly%20used,not%20restricted%20to%20those%20modalities.', 'https://github.com/Rayhane-mamah/Tacotron-2', 'https://github.com/NVIDIA/tacotron2'] | 5 |
60,700,254 | <p>Here is the full leaderboard of GANs for CIFAR-10 (<a href="https://paperswithcode.com/sota/image-generation-on-cifar-10" rel="nofollow noreferrer">link</a>). The ranking is based on Inception Score. </p>
<p>The current best method (or state of the art) is NCSN (paper: <a href="https://arxiv.org/pdf/1907.05600v2.pdf" rel="nofollow noreferrer">link</a>, code: <a href="https://github.com/ermongroup/ncsn" rel="nofollow noreferrer">link</a>).</p> | 2020-03-16 04:14:51.487000+00:00 | 2020-03-16 04:14:51.487000+00:00 | null | null | 60,681,913 | <p>I want to know the best GAN model for training on CIFAR-10.
I have searched lots of models like DCGAN, WGAN, CGAN, SSGAN, and SNGAN, but it seems like I want a better one.
Could you tell me which is best, based on your experience or on FID/IS scores?</p>
<p>Thank you.</p> | 2020-03-14 10:47:51.247000+00:00 | 2020-03-16 04:14:51.487000+00:00 | null | machine-learning|deep-learning|generative-adversarial-network|dcgan | ['https://paperswithcode.com/sota/image-generation-on-cifar-10', 'https://arxiv.org/pdf/1907.05600v2.pdf', 'https://github.com/ermongroup/ncsn'] | 3 |
48,251,368 | <blockquote>
<p>In the context of NLP, that means that sequences with variable lengths do not necessarily need to be padded to the same length.</p>
</blockquote>
<p>This means that you don't need to pad sequences <strong>unless you are doing data batching</strong> which is currently the only way to add parallelism in PyTorch. DyNet has a method called <a href="https://dynet.readthedocs.io/en/latest/tutorials_notebooks/Autobatching.html" rel="nofollow noreferrer">autobatching</a> (which is described in detail <a href="https://arxiv.org/abs/1705.07860" rel="nofollow noreferrer">in this paper</a>) that does batching on the graph operations instead of the data, so this might be what you want to look into.</p>
<blockquote>
<p>But, if I want to use PyTorch DataLoader, I need to pad my sequences anyway because the DataLoader only takes tensors - given that me as a total beginner does not want to build some customized collate_fn.</p>
</blockquote>
<p>You can use the <code>DataLoader</code> given that you write your own <code>Dataset</code> class and you are using <code>batch_size=1</code>. The twist is to use numpy arrays for your variable-length sequences (otherwise <code>default_collate</code> will give you a hard time):</p>
<pre><code>import numpy as np

from torch.utils.data import Dataset
from torch.utils.data.dataloader import DataLoader


class FooDataset(Dataset):
    def __init__(self, data, target):
        assert len(data) == len(target)
        self.data = data
        self.target = target

    def __getitem__(self, index):
        return self.data[index], self.target[index]

    def __len__(self):
        return len(self.data)


data = [[1,2,3], [4,5,6,7,8]]
data = [np.array(n) for n in data]
targets = ['a', 'b']

ds = FooDataset(data, targets)
dl = DataLoader(ds, batch_size=1)

print(list(enumerate(dl)))
# [(0, [
# 1 2 3
# [torch.LongTensor of size 1x3]
# , ('a',)]), (1, [
# 4 5 6 7 8
# [torch.LongTensor of size 1x5]
# , ('b',)])]
</code></pre>
<blockquote>
<p>Now this makes me wonder - doesn’t this wash away the whole advantage of dynamic computational graphs in this context?</p>
</blockquote>
<p>Fair point, but the main strength of dynamic computational graphs is (at least currently) mainly the possibility of using debugging tools like pdb, which rapidly decreases your development time. Debugging is way harder with static computation graphs. There is also no reason why PyTorch would not implement further just-in-time optimizations or a concept similar to DyNet's auto-batching in the future.</p>
<blockquote>
<p>Also, if I pad my sequences to feed it into the DataLoader as a tensor with many zeros as padding tokens at the end [...], will it have any negative effect on my training [...]?</p>
</blockquote>
<p>Yes, both in runtime and for the gradients. The RNN will iterate over the padding just like normal data which means that you have to deal with it in some way. PyTorch supplies you with tools for dealing with padded sequences and RNNs, namely <a href="http://pytorch.org/docs/0.3.0/nn.html?highlight=packed%20sequence#torch.nn.utils.rnn.pad_packed_sequence" rel="nofollow noreferrer"><code>pad_packed_sequence</code></a> and <a href="http://pytorch.org/docs/0.3.0/nn.html?highlight=packed%20sequence#pack-padded-sequence" rel="nofollow noreferrer"><code>pack_padded_sequence</code></a>. These will let you ignore the padded elements during RNN execution, but beware: this does not work with RNNs that you implement yourself (or at least not if you don't add support for it manually).</p> | 2018-01-14 15:55:07.120000+00:00 | 2018-01-14 23:04:33.617000+00:00 | 2018-01-14 23:04:33.617000+00:00 | null | 48,244,053 | <p>As far as I understand, the strength of PyTorch is supposed to be that it works with dynamic computational graphs. In the context of NLP, that means that sequences with variable lengths do not necessarily need to be padded to the same length. But, if I want to use PyTorch DataLoader, I need to pad my sequences anyway because the DataLoader only takes tensors - given that me as a total beginner does not want to build some customized collate_fn.</p>
<p>Now this makes me wonder - doesn’t this wash away the whole advantage of dynamic computational graphs in this context?
Also, if I pad my sequences to feed it into the DataLoader as a tensor with many zeros as padding tokens at the end (in the case of word ids), will it have any negative effect on my training since PyTorch may not be optimized for computations with padded sequences (since the whole premise is that it can work with variable sequence lengths in the dynamic graphs), or does it simply not make any difference?</p>
<p>I will also post this question in the PyTorch Forum...</p>
<p>Thanks!</p> | 2018-01-13 20:33:00.247000+00:00 | 2018-01-14 23:04:33.617000+00:00 | null | nlp|deep-learning|padding|pytorch | ['https://dynet.readthedocs.io/en/latest/tutorials_notebooks/Autobatching.html', 'https://arxiv.org/abs/1705.07860', 'http://pytorch.org/docs/0.3.0/nn.html?highlight=packed%20sequence#torch.nn.utils.rnn.pad_packed_sequence', 'http://pytorch.org/docs/0.3.0/nn.html?highlight=packed%20sequence#pack-padded-sequence'] | 4 |
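<p>Addendum to the answer above: a minimal sketch of how <code>pack_padded_sequence</code> / <code>pad_packed_sequence</code> are typically used with a built-in RNN (assuming a reasonably recent PyTorch version; the numbers are made up for illustration):</p>
<pre><code>import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Two padded sequences of true lengths 5 and 3, longest first, batch_first layout
padded = torch.zeros(2, 5, 1)
padded[0, :5, 0] = torch.tensor([4., 5., 6., 7., 8.])
padded[1, :3, 0] = torch.tensor([1., 2., 3.])
lengths = [5, 3]

rnn = nn.LSTM(input_size=1, hidden_size=4, batch_first=True)
packed = pack_padded_sequence(padded, lengths, batch_first=True)
packed_out, _ = rnn(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.shape, out_lengths)  # the LSTM never iterates over the padded time steps
</code></pre>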
9,194,951 | <p>You may limit the number of retries when you are implementing <a href="http://arxiv.org/pdf/1001.3756.pdf" rel="nofollow">soft real-time systems</a> in which certain operations have to occur within a defined duration of time or at a specified time and if they don't, another action must be taken. <br><br>In such a case, you do not want other processes to wait while not being sure whether the transaction has been successful. However, several transaction contexts of mnesia may change the way transaction commits are done in your application. <br><br>
However, in my personal experience, real-time systems with mnesia are better handled using mnesia events, whereby any write, update, delete, insert, etc. generates an instantaneous event sent to all subscribed processes/servers, there and then. The processes that receive such an event message can then take whatever action they need.</p> | 2012-02-08 14:11:15.187000+00:00 | 2012-02-08 14:11:15.187000+00:00 | null | null | 9,193,441 | <p>Mnesia lets you limit the number of times it retries a transaction:</p>
<pre><code>MyFun = fun() -> ... end,
{atomic, ok} = mnesia:transaction(MyFun, [], 42)
</code></pre>
<p>If you don't specify a number, it defaults to <code>infinity</code>.</p>
<p>I've never seen any code that actually limits the number of retries. Have you? In what cases is it useful?</p> | 2012-02-08 12:37:49.140000+00:00 | 2012-02-08 14:11:15.187000+00:00 | null | erlang|mnesia | ['http://arxiv.org/pdf/1001.3756.pdf'] | 1 |
63,960,898 | <p>In DeepSORT, you need to have detection in order to perform tracking. It is a tracking-by-detection method. The detection results are input to the Kalman filter component of DeepSORT. The filter generates tracking predictions. Also, the bounding boxes from detection are used to extract crops of RoI from the input image. These image crops are used by the trained Siamese model for feature extraction. The feature extraction by the Siamese model helps in reducing ID Switch.</p>
<p>If you are only interested in tracking, and ID switches in case of occlusion are not your concern, then you can have a look at <a href="https://arxiv.org/abs/2004.01177" rel="nofollow noreferrer">CenterTrack</a>. It does joint detection and tracking in a single model. In this case, you can avoid model training from scratch. The authors provide pre-trained models for tracking both pedestrians and vehicles. Compared to DeepSORT, CenterTrack is pretty fast.</p> | 2020-09-18 18:18:34.023000+00:00 | 2020-09-18 18:18:34.023000+00:00 | null | null | 63,774,168 | <p>I am new to computer vision, and I still haven't tried any kind of neural network detection such as YOLO; however, I wish to do object tracking before entering the field of detection. I started reading about DeepSORT, and all the projects use deep learning detections that need training. My question is: can I give an ROI result to my DeepSORT tracker instead of detections using YOLO, so that it continues tracking the object selected with the ROI?</p>
<p>Here is a link that i found information about the code of DeepSORT.<a href="https://nanonets.com/blog/object-tracking-deepsort/" rel="nofollow noreferrer">DeepSORT: Deep Learning to Track Custom Objects in a Video</a></p> | 2020-09-07 08:42:21.613000+00:00 | 2021-01-20 14:00:37.490000+00:00 | 2020-09-21 11:22:55.657000+00:00 | python|tracking | ['https://arxiv.org/abs/2004.01177'] | 1 |
26,674,808 | <p>Juan gave the correct answer. I'm filtering for Germany only using this:</p>
<pre><code># Bounding boxes for geolocations
# Online-Tool to create boxes (c+p as raw CSV): http://boundingbox.klokantech.com/
GEOBOX_WORLD = [-180,-90,180,90]
GEOBOX_GERMANY = [5.0770049095, 47.2982950435, 15.0403900146, 54.9039819757]
stream.filter(locations=GEOBOX_GERMANY)
</code></pre>
<p>This is a pretty crude box that includes parts of some other countries. If you want a finer grain you can combine multiple boxes to fill out the location you need.</p>
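<p>For example, since the streaming API's <code>locations</code> parameter takes a flat list of longitude/latitude pairs, several boxes can simply be concatenated into one list. The second box below is only a rough, hypothetical example (coordinates approximate):</p>
<pre><code># Rough example box for Austria (approximate coordinates)
GEOBOX_AUSTRIA = [9.53, 46.37, 17.16, 49.02]
# Multiple boxes are passed as one flat list of lon/lat pairs
stream.filter(locations=GEOBOX_GERMANY + GEOBOX_AUSTRIA)
</code></pre>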
<p>It should be noted though that <strong>you limit the number of tweets quite a bit if you filter by geotags</strong>. This is from roughly 5 million Tweets from my test database (the query should return the %age of tweets that actually contain a geolocation):</p>
<pre><code>> db.tweets.find({coordinates:{$ne:null}}).count() / db.tweets.count()
0.016668392651547598
</code></pre>
<p>So only 1.67% of my sample of the 1% stream includes a geotag. However, there are other ways of figuring out a user's location:
<a href="http://arxiv.org/ftp/arxiv/papers/1403/1403.2345.pdf">http://arxiv.org/ftp/arxiv/papers/1403/1403.2345.pdf</a></p> | 2014-10-31 12:33:11.250000+00:00 | 2014-10-31 12:33:11.250000+00:00 | null | null | 22,889,122 | <p>I have found the following piece of code that works pretty well for letting me view in Python Shell the standard 1% of the twitter firehose:</p>
<pre><code>import sys
import tweepy
consumer_key=""
consumer_secret=""
access_key = ""
access_secret = ""
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
class CustomStreamListener(tweepy.StreamListener):
def on_status(self, status):
print status.text
def on_error(self, status_code):
print >> sys.stderr, 'Encountered error with status code:', status_code
return True # Don't kill the stream
def on_timeout(self):
print >> sys.stderr, 'Timeout...'
return True # Don't kill the stream
sapi = tweepy.streaming.Stream(auth, CustomStreamListener())
sapi.filter(track=['manchester united'])
</code></pre>
<p>How do I add a filter to only parse tweets from a certain location? Ive seen people adding GPS to other twitter related Python code but I cant find anything specific to sapi within the Tweepy module.</p>
<p>Any ideas?</p>
<p>Thanks</p> | 2014-04-06 01:57:40.767000+00:00 | 2020-01-28 03:16:29.993000+00:00 | null | python|twitter|tweepy | ['http://arxiv.org/ftp/arxiv/papers/1403/1403.2345.pdf'] | 1 |
39,467,376 | <p>One deep-learning-based solution would be to use <a href="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/" rel="nofollow">word embeddings</a>, which ideally represent a word by a fixed-dimensional vector such that similar words lie close in that embedding space and even vector operations like <code>Germany - Berlin ~= Italy - Rome</code> may hold. Two famous word embedding techniques are <a href="https://code.google.com/archive/p/word2vec/" rel="nofollow">Word2Vec</a> and <a href="http://www.aclweb.org/anthology/D14-1162" rel="nofollow">Glove</a>. Another option is to represent a whole sentence by a fixed-dimensional vector such that similar sentences lie close in that embedding space; check <a href="https://arxiv.org/abs/1506.06726" rel="nofollow">Skip-Thought vectors</a>. So far we have only tried to represent text (words/sentences) in a more semantic numerical way; the next step is to capture the meaning of the current context (paragraphs, documents). A very naive approach would be to just average the word/sentence embeddings (you have to try this to see if it works or not); a better way would be to use some kind of sequence model like an <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow">RNN</a> (in practice an <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="nofollow">LSTM</a> or <a href="https://arxiv.org/pdf/1412.3555v1.pdf" rel="nofollow">GRU</a>) to capture whatever has been said before. The problem with sequence models is that they normally need supervision (i.e. labelled data); if you don't have labels, which I guess is the case, just use the sequence model in a <a href="https://en.wikipedia.org/wiki/Language_model" rel="nofollow">language modelling</a> setting and take the hidden representation of the RNN/GRU/LSTM at the last time step, i.e. after reading the last word (or the aggregated word embeddings if you are using the naive approach). Once you have the hidden representation, you may apply any clustering technique to cluster the different paragraphs (you have to find the appropriate <a href="https://en.wikipedia.org/wiki/Metric_(mathematics)" rel="nofollow">distance metric</a>), or you can manually apply some distance metric and define or learn a threshold for similar paragraphs to be categorized as one.</p> | 2016-09-13 10:07:52.403000+00:00 | 2016-09-13 10:07:52.403000+00:00 | null | null | 39,462,199 | <p>I am trying to learn Natural Language Processing and am stuck with an open-ended question. How do I club together sentences that mean the same? There can be a finite set of sentences that have the same meaning. What kind of algorithms do I use to club them?</p>
<p>For example: Consider the following sentences:</p>
<pre><code>There is a man. There is a lion. The lion will chase the man on seeing him. If the lion catches the man he dies.
There is a man and a lion. If the lion catches the man he dies. The lion will chase the man if he sees him.
You have a lion that chases men on seeing them. There is one man. If the lion catches the man he dies.
</code></pre>
<p>Basically what all these sentences say is this:</p>
<pre><code> 1 Lion. 1 Man. Lions chase men. If lion catches men the man dies.
</code></pre>
<p>I am unable to zero in on one category of Machine Learning or Deep Learning algorithm that would help me achieve something similar. Please guide me in the right direction or point me to some algorithms that are good enough to achieve this.</p>
<p>Another important factor is having a scale-able solution. There could be lots of such sentences out there. What happens then?</p>
<p>One possible solutions is:
Use the parts of speech and the relations between words in a sentence as features for some Machine Leaning algo. But will this be practical in a large set of sentences? Do we need to consider more things?</p> | 2016-09-13 04:23:37.877000+00:00 | 2016-09-13 10:07:52.403000+00:00 | null | machine-learning|nlp|deep-learning | ['http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/', 'https://code.google.com/archive/p/word2vec/', 'http://www.aclweb.org/anthology/D14-1162', 'https://arxiv.org/abs/1506.06726', 'http://karpathy.github.io/2015/05/21/rnn-effectiveness/', 'http://colah.github.io/posts/2015-08-Understanding-LSTMs/', 'https://arxiv.org/pdf/1412.3555v1.pdf', 'https://en.wikipedia.org/wiki/Language_model', 'https://en.wikipedia.org/wiki/Metric_(mathematics)'] | 9 |
49,897,091 | <p>A straight through estimator is a way of estimating gradients for a threshold operation in a neural network. The threshold could be as simple as the following function,</p>
<p><a href="https://i.stack.imgur.com/ECIIg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ECIIg.png" alt="enter image description here"></a></p>
<p>As we can see, the derivative of this threshold function will be 0, and during back-propagation the network will not learn anything since it gets 0 gradients and the weights won't get updated. </p>
<p>The concept of a straight through estimator is that you set the incoming gradients to a threshold function equal to its outgoing gradients, disregarding the derivative of the threshold function itself. This has been shown to perform well in the results (Figure 2) in <a href="https://arxiv.org/pdf/1308.3432.pdf" rel="noreferrer">this</a> paper you have referenced.</p> | 2018-04-18 10:14:46.057000+00:00 | 2018-04-18 10:14:46.057000+00:00 | null | null | 38,361,314 | <p>I have seen straight through estimator (STE) in many Neural Network related papers e.g. <a href="http://arxiv.org/pdf/1308.3432v1.pdf" rel="noreferrer">this</a> and <a href="https://arxiv.org/pdf/1606.06160.pdf" rel="noreferrer">this</a>. But I cannot understand the concept. I wonder if anyone could explain STE or refer me to a simple resource? </p>
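<p>To make the idea from the answer above concrete, here is a minimal PyTorch sketch of a straight-through estimator (my own illustration, not code from the referenced papers): the forward pass applies the hard threshold, while the backward pass lets the incoming gradient through unchanged.</p>
<pre><code>import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # hard threshold in the forward pass
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through: ignore the (zero) derivative of the threshold
        return grad_output

x = torch.randn(5, requires_grad=True)
y = BinarizeSTE.apply(x).sum()
y.backward()
print(x.grad)  # all ones, even though the threshold itself has zero gradient
</code></pre>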
57,208,156 | <p>I'm pretty sure, based on the age of the post, you've already solved your problem. But just in case, here is my 2 cents. </p>
<p>You end up predicting the most frequent word. So, if you remove the stopwords, you'll be predicting the next most frequent word. There are two ways, that I know of, to deal with this problem.</p>
<p>First, you can use a loss that emphasizes the less frequent classes, or words in your case. Here is a <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer">research paper</a> introducing focal loss, and conveniently, a <a href="https://github.com/umbertogriffo/focal-loss-keras" rel="nofollow noreferrer">github</a> implementation of it for keras. </p>
<p>Another approach is to use class_weight in the fit function.</p>
<pre class="lang-py prettyprint-override"><code>model.fit(X, y, epochs=50, batch_size=128, callbacks=callbacks_list, class_weight=class_weight)
</code></pre>
<p>where you can set the weights for words with lower frequencies higher, e.g. inversely proportional to the frequencies. </p> | 2019-07-25 18:27:13.180000+00:00 | 2019-07-25 18:27:13.180000+00:00 | null | null | 43,413,812 | <p>I have created a model in keras using LSTM for predicting the next word given a sequence of words.Below is my code for the same:</p>
<pre><code> # Small LSTM Network to Generate Text for Alice in Wonderland
 import re
 import numpy
 from keras.models import Sequential
 from keras.layers import Dense, Dropout, LSTM
 from keras.callbacks import ModelCheckpoint
 from keras.utils import np_utils
 # load ascii text and convert to lowercase
filename = "wonderland.txt"
raw_text = open(filename).read()
raw_text = raw_text.lower()
print raw_text
# create mapping of unique words to integers
print raw_text
raw_text = re.sub(r'[^\w\s]','',raw_text)
raw_text = re.sub('[^a-z\ \']+', " ", raw_text)
words_unsorted=list(raw_text.split())
words= sorted(list(set(raw_text.split())))
word_to_int = dict((w, i) for i, w in enumerate(words))
int_to_word = dict((i, w) for i, w in enumerate(words))
#print word_to_int
n_words = len(words_unsorted)
n_vocab = len(words)
print "Total Words: ", n_words
print "Total Vocab: ", n_vocab
# prepare the dataset of input to output pairs encoded as integers
seq_length = 7
dataX = []
dataY = []
for i in range(0, n_words - seq_length, 1):
seq_in = words_unsorted[i:i + seq_length]
seq_out = words_unsorted[i + seq_length]
#print seq_in
dataX.append([word_to_int[word] for word in seq_in])
dataY.append(word_to_int[seq_out])
n_patterns = len(dataX)
print "Total Patterns: ", n_patterns
# reshape X to be [samples, time steps, features]
X = numpy.reshape(dataX, (n_patterns, seq_length, 1))
print X[0]
# normalize
X = X / float(n_vocab)
# one hot encode the output variable
y = np_utils.to_categorical(dataY)
# define the LSTM model
model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(256))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
print model.summary()
# define the checkpoint
filepath="weights-improvement-{epoch:02d}-{loss:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(X, y, epochs=50, batch_size=128, callbacks=callbacks_list)
</code></pre>
<p>Issue is when I predict on a test sentence I always end up getting "and" as the next word prediction!Should I remove all stop words or something else?Further,I am training it for 20 epochs.</p> | 2017-04-14 14:44:23.573000+00:00 | 2019-07-25 18:27:13.180000+00:00 | null | deep-learning|keras|lstm|data-science | ['https://arxiv.org/pdf/1708.02002.pdf', 'https://github.com/umbertogriffo/focal-loss-keras'] | 2 |
38,332,538 | <p>It is not necessary that the input size of the CNN should be the same as that of the training data.</p>
<p>What input size to choose depends on what application you are using the CNN for. For example, for classification, a 32x32 image might give a good accuracy. But for something like segmentation, it will most probably not give a good output. That being said, using a higher resolution image will result in a slightly higher accuracy. Refer to this <a href="https://arxiv.org/abs/1501.02876" rel="nofollow">paper</a>. So, if you can afford the extra processing time, go for the higher resolution.</p> | 2016-07-12 15:01:06.833000+00:00 | 2016-07-12 15:01:06.833000+00:00 | null | null | 38,196,845 | <p>Should the input size of the CNN follow that of the training data ? For example, if my training data is of size 192 x 98 then what should the input size of my CNN be ? 192 x 192 ? 98 x 98 ? Would it be a bad idea if I use a 32x32 input CNN ?</p>
<p>I have sooooo many questions on the specifics of CNN but no one has the answer.</p> | 2016-07-05 06:45:41.297000+00:00 | 2016-07-12 15:01:06.833000+00:00 | null | conv-neural-network | ['https://arxiv.org/abs/1501.02876'] | 1 |
57,743,344 | <p>ZCA whitening is typically a preprocessing step, like center-reduction, which basically aims at making your data more NN-friendly (additional info below). As such, it is supposed to be applied once, right before training. </p>
<p>So right before you starts training your model with a given dataset <code>X</code>, compute the whitened dataset <code>Z</code>, which is simply the multiplication of <code>X</code> with the ZCA matrix <code>W_zca</code> that you can learn to compute <a href="https://stackoverflow.com/questions/31528800/how-to-implement-zca-whitening-python">here</a>. Then train your model on the whitened dataset.
Finally, you should have something that looks like this</p>
<pre><code>class MyModule(torch.nn.Module):
def __init__(self):
super(MyModule,self).__init__()
# Feel free to use something more useful than a simple linear layer
self._network = torch.nn.Linear(...)
# Do your stuff
...
def fit(self, inputs, labels):
""" Trains the model to predict the right label for a given input """
# Compute the whitening matrix and inputs
self._zca_mat = compute_zca(inputs)
whitened_inputs = torch.mm(self._zca_mat, inputs)
# Apply training on the whitened data
outputs = self._network(whitened_inputs)
        loss = torch.nn.MSELoss()(outputs, labels)
loss.backward()
optimizer.step()
def forward(self, input):
# You always need to apply the zca transform before forwarding,
# because your network has been trained with whitened data
whitened_input = torch.mm(self._zca_mat, input)
predicted_label = self._network.forward(whitened_input)
return predicted_label
</code></pre>
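<p>For completeness, a minimal sketch of what the <code>compute_zca</code> helper used above could look like (an assumption on my part, following the derivation linked earlier; it expects a 2-D tensor of zero-centered data with shape <code>(num_samples, num_features)</code>):</p>
<pre><code>import torch

def compute_zca(inputs, eps=1e-5):
    # covariance matrix of the (centered) data
    sigma = torch.mm(inputs.t(), inputs) / inputs.size(0)
    u, s, _ = torch.svd(sigma)
    # W_zca = U * diag(1 / sqrt(s + eps)) * U^T
    d = torch.diag(1.0 / torch.sqrt(s + eps))
    return torch.mm(torch.mm(u, d), u.t())
</code></pre>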
<h1>Additional info</h1>
<p>Whitening your data means decorrelating its dimensions so that the correlation matrix of the whitened data is the identity matrix. It is a rotation-scaling operation (thus linear), and there are actually an infinity of possible ZCA transforms. To understand the maths behind ZCA, read <a href="https://arxiv.org/pdf/1512.00809.pdf" rel="nofollow noreferrer">this</a></p> | 2019-09-01 05:06:30.843000+00:00 | 2019-09-01 05:06:30.843000+00:00 | null | null | 57,709,758 | <p>I need to apply ZCA whitening in PyTorch. I think I have found a way this can be done by using transforms.LinearTransformation and I have found a test in the PyTorch repo which gives some insight into how this is done (see final code block or link below)</p>
<p><a href="https://github.com/pytorch/vision/blob/master/test/test_transforms.py" rel="nofollow noreferrer">https://github.com/pytorch/vision/blob/master/test/test_transforms.py</a></p>
<p>I am struggling to work out how I apply something like this myself.</p>
<p>Currently I have transforms along the lines of:</p>
<pre><code> transform_test = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(np.array([125.3, 123.0, 113.9]) / 255.0,
np.array([63.0, 62.1, 66.7]) / 255.0),
])
</code></pre>
<p>The documents say they way to use LinearTransformation is as follows:</p>
<pre><code>torchvision.transforms.LinearTransformation(transformation_matrix, mean_vector)
</code></pre>
<blockquote>
<p>whitening transformation: Suppose X is a column vector zero-centered
data. Then compute the data covariance matrix [D x D] with
torch.mm(X.t(), X), perform SVD on this matrix and pass it as
transformation_matrix.</p>
</blockquote>
<p>I can see from the tests I linked above and copied below that they are using torch.mm to calculate what they call a principal_components:</p>
<pre><code>def test_linear_transformation(self):
num_samples = 1000
x = torch.randn(num_samples, 3, 10, 10)
flat_x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
# compute principal components
sigma = torch.mm(flat_x.t(), flat_x) / flat_x.size(0)
u, s, _ = np.linalg.svd(sigma.numpy())
zca_epsilon = 1e-10 # avoid division by 0
d = torch.Tensor(np.diag(1. / np.sqrt(s + zca_epsilon)))
u = torch.Tensor(u)
principal_components = torch.mm(torch.mm(u, d), u.t())
mean_vector = (torch.sum(flat_x, dim=0) / flat_x.size(0))
# initialize whitening matrix
whitening = transforms.LinearTransformation(principal_components, mean_vector)
# estimate covariance and mean using weak law of large number
num_features = flat_x.size(1)
cov = 0.0
mean = 0.0
for i in x:
xwhite = whitening(i)
xwhite = xwhite.view(1, -1).numpy()
cov += np.dot(xwhite, xwhite.T) / num_features
mean += np.sum(xwhite) / num_features
# if rtol for std = 1e-3 then rtol for cov = 2e-3 as std**2 = cov
assert np.allclose(cov / num_samples, np.identity(1), rtol=2e-3), "cov not close to 1"
assert np.allclose(mean / num_samples, 0, rtol=1e-3), "mean not close to 0"
# Checking if LinearTransformation can be printed as string
whitening.__repr__()
</code></pre>
<p>How do I apply something like this? Do I use it where I define my transforms, or do I apply it in my training loop where I am iterating over my training data? </p>
<p>Thanks in advance</p> | 2019-08-29 12:00:03.627000+00:00 | 2019-09-01 05:06:30.843000+00:00 | null | pytorch | ['https://stackoverflow.com/questions/31528800/how-to-implement-zca-whitening-python', 'https://arxiv.org/pdf/1512.00809.pdf'] | 2 |
49,254,719 | <p>There's a variation of k-means called k-modes, published here:</p>
<p><a href="http://www.cs.ust.hk/~qyang/Teaching/537/Papers/huang98extensions.pdf" rel="nofollow noreferrer">http://www.cs.ust.hk/~qyang/Teaching/537/Papers/huang98extensions.pdf</a></p>
<p>This is suitable for categorical data. </p>
<p>Please note here that the solutions you get are sensitive to initial conditions, as discussed here </p>
<p><a href="https://arxiv.org/ftp/cs/papers/0603/0603120.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/cs/papers/0603/0603120.pdf</a></p>
<p>see this for pythonic implementation</p>
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html" rel="nofollow noreferrer">http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html</a> </p> | 2018-03-13 11:12:38.213000+00:00 | 2018-03-13 11:12:38.213000+00:00 | null | null | 49,254,559 | <p>I was wondering if there was a way to find patterns in a pandas DataFrame based on categories.</p>
<p>I know kmeans works for numeric values but my dataframe mainly consists of categories and dates:</p>
<pre><code>car check jobcard date season
merc A 12A 01-01-2010 Winter
bmw B 45A 03-02-2010 Winter
merc A 12D 10-01-2010 Winter
bmw C 25C 01-05-2010 Spring
vw A 62B 01-08-2010 Summer
etc
</code></pre>
<p>It goes on for about 5000 rows, the dataset represents different types of checks that required repairs after the check, and I would like to see a pattern, for instance a BMW has problems mainly in the summer, or the 12A jobcard never happens in the winter. I have already made some scatterplots, but I was not able to get any results from them: Scatterplot</p>
<p><a href="https://i.stack.imgur.com/JTn56.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JTn56.png" alt="enter image description here"></a></p>
<p>Is there any package that can provide a better overview or that can cluster categories in the same way kmeans does with numeric values?</p> | 2018-03-13 11:04:58.170000+00:00 | 2018-03-13 11:21:12.583000+00:00 | 2018-03-13 11:10:05.233000+00:00 | python|pandas|cluster-analysis|categories | ['http://www.cs.ust.hk/~qyang/Teaching/537/Papers/huang98extensions.pdf', 'https://arxiv.org/ftp/cs/papers/0603/0603120.pdf', 'http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html'] | 3 |
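<p>As a concrete follow-up to the k-modes suggestion above, here is a hypothetical sketch using the third-party <code>kmodes</code> package (assuming it is installed via pip; the exact API may differ between versions):</p>
<pre><code>import pandas as pd
from kmodes.kmodes import KModes

df = pd.DataFrame({
    'car':     ['merc', 'bmw', 'merc', 'bmw', 'vw'],
    'check':   ['A', 'B', 'A', 'C', 'A'],
    'jobcard': ['12A', '45A', '12D', '25C', '62B'],
    'season':  ['Winter', 'Winter', 'Winter', 'Spring', 'Summer'],
})

km = KModes(n_clusters=2, init='Huang', n_init=5)
labels = km.fit_predict(df)
print(labels)                  # cluster assignment per row
print(km.cluster_centroids_)   # modal category of each feature per cluster
</code></pre>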
42,003,555 | <p>Sure Neural networks have been used on Text like in <a href="https://arxiv.org/pdf/1609.08144v2.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1609.08144v2.pdf</a>. You only want to output classes and not sentences so you have an easier time then they have. To combine the classifier you could use a weighted rank sum on the outputs.</p>
<p>How much the classifier improves sounds very interesting to me and could be the basis for a publication. </p> | 2017-02-02 13:32:23.917000+00:00 | 2017-02-02 13:32:23.917000+00:00 | null | null | 42,003,047 | <p>I am going to make a classifier that can categorize images. I know that I should use convolutional neural network for this. The thing is that for every image I have a discription. Is there any way that I can use this description to improve the classifier?</p> | 2017-02-02 13:06:37.950000+00:00 | 2017-02-02 16:49:42.377000+00:00 | null | image|neural-network|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1609.08144v2.pdf'] | 1 |
52,513,545 | <p>I don't know if the question is still relevant to you or not, but another solution is to invert the coefficient matrix and then multiply the inverted matrix by the vector b. There are many matrix inversion algorithms. One such algorithm can be found in the following paper:</p>
<p><a href="https://arxiv.org/pdf/1801.04723.pdf" rel="nofollow noreferrer">SPIN: A Fast and Scalable Matrix Inversion Method in Apache Spark</a></p>
<p>You can find the complete code on <a href="https://github.com/chandan-misra/Spark-Matrix-Inverse/blob/master/StrassenInverse.java" rel="nofollow noreferrer">GitHub link</a> also.</p>
<p>Cheers!!</p> | 2018-09-26 08:44:35.930000+00:00 | 2018-09-26 08:44:35.930000+00:00 | null | null | 39,888,170 | <p>I have a system of linear equations in the form of <code>Ax = b</code> to solve in Spark.</p>
<p><code>A</code> is <code>n by n</code></p>
<p><code>b</code> is <code>n by 1</code></p>
<p>I represent<code>A</code> in the form of <code>IndexedRowMatrix</code> or <code>RowMatrix</code> and <code>b</code> in the form of <code>DenseMatrix</code> or <code>DenseVector</code>.</p>
<p>How can I solve this system to calculate the <code>x</code> vector?</p>
<p>If the suggested solution is <a href="https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/linalg/CholeskyDecomposition.scala" rel="nofollow">Cholesky Decomposition</a>, would you please guide me through doing it as it is not part of the public API ? For example if the original matrix <code>A</code> is:</p>
<pre><code>1,2,3,4
2,1,5,6
3,5,1,7
4,6,7,1
</code></pre>
<p>and <code>b</code> is:</p>
<pre><code>5,6,7,8
</code></pre>
<p>What is passed as argument to the <code>solve</code> method ?</p>
<p>Any other solution other than inversing <code>A</code> would be very helpful.</p> | 2016-10-06 05:28:43.183000+00:00 | 2018-09-26 08:44:35.930000+00:00 | 2016-10-06 05:38:09.580000+00:00 | scala|matrix|apache-spark | ['https://arxiv.org/pdf/1801.04723.pdf', 'https://github.com/chandan-misra/Spark-Matrix-Inverse/blob/master/StrassenInverse.java'] | 2 |
57,240,678 | <p>The task you are referring to is called <strong>paraphrasing</strong>.</p>
<p>There is a lot of research in the field. On <a href="https://arxiv.org/search/?query=paraphrase%20generation&searchtype=title&abstracts=show&order=-announced_date_first&size=50" rel="nofollow noreferrer">arXiv</a> you will find research papers on the topic. However, since you are asking for an API, I am assuming you don't want to implement these models yourself. Luckily, some authors have <a href="https://github.com/topics/paraphrase-generation" rel="nofollow noreferrer">published their models online on GitHub</a>. (Note: some are re-implementations by someone else.)</p>
<p>When you use some of these implementations, note that most offer a <strong>pre-trained model</strong>. Do read which data set was used for training and try to pick the one that is the most similar to the data that you are facing. By doing so, more words in the domain of your descriptions will be available and more synonyms can be used.</p> | 2019-07-28 11:58:12.057000+00:00 | 2019-07-28 11:58:12.057000+00:00 | null | null | 57,222,043 | <p>i want to copy a data from a website which sells courses like ITIL, Prince2 and PMP and many other IT sector courses now there are 20,000 different courses's description is there.</p>
<p>However, i want to use selenium to scrape all the data but description is still subject to copyright.</p>
<p>Kindly let me know how i can manipulate all of that description to data to same meaning but different words.</p>
<p>Is there any API which can give me an access to build an code which will be helping these description data by using it's synonymous or which can change it's grammer to completely new sentennces but same meaning.</p>
<p>Kindly let me know where to start this.</p>
<p>Thanks,</p> | 2019-07-26 14:24:34.480000+00:00 | 2019-07-28 11:58:12.057000+00:00 | null | selenium|artificial-intelligence | ['https://arxiv.org/search/?query=paraphrase%20generation&searchtype=title&abstracts=show&order=-announced_date_first&size=50', 'https://github.com/topics/paraphrase-generation'] | 2 |
69,243,223 | <p>In the deep learning world, ReLU is usually preferred over other activation functions because it overcomes the <strong>vanishing gradient problem</strong>, allowing models to learn faster and perform better. But it can have downsides.</p>
<p><strong>Dying ReLU problem</strong></p>
<p>The dying ReLU problem refers to the scenario when a large number of ReLU neurons only output values of 0. When most of these neurons return output zero, the gradients fail to flow during backpropagation and the weights do not get updated. Ultimately a large part of the network becomes inactive and it is unable to learn further.</p>
<p><strong>What causes the Dying ReLU problem?</strong></p>
<ul>
<li>High learning rate: If learning rate is set too high, there is a significant chance that new weights will be in negative value range.</li>
<li>Large negative bias: Large negative bias term can indeed cause the inputs to the ReLU activation to become negative.</li>
</ul>
<p><strong>How to solve the Dying ReLU problem?</strong></p>
<ul>
<li><p>Use of a smaller learning rate: It can be a good idea to decrease the learning rate during the training.</p>
</li>
<li><p>Variations of ReLU: <strong>Leaky ReLU</strong> is a common effective method to solve a dying ReLU problem, and it does so by adding a slight slope in the negative range. There are other variations like PReLU, ELU, GELU. If you want to dig deeper check out this <a href="https://himanshuxd.medium.com/activation-functions-sigmoid-relu-leaky-relu-and-softmax-basics-for-neural-networks-and-deep-8d9c70eed91e" rel="nofollow noreferrer">link</a>.</p>
</li>
<li><p>Modification of initialization procedure: It has been demonstrated that the use of a randomized asymmetric initialization can help prevent the dying ReLU problem. Do check out the <a href="https://arxiv.org/abs/1903.06733" rel="nofollow noreferrer">arXiv paper</a> for the mathematical details</p>
</li>
</ul>
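<p>As promised above, a minimal PyTorch sketch (my own illustration) of swapping ReLU for Leaky ReLU in a small fully-connected network like the one described in the question:</p>
<pre><code>import torch.nn as nn

# Small 3-layer network with a single input and output; LeakyReLU keeps a small
# gradient for negative pre-activations, so units cannot "die" completely.
model = nn.Sequential(
    nn.Linear(1, 32),
    nn.LeakyReLU(negative_slope=0.01),
    nn.Linear(32, 32),
    nn.LeakyReLU(negative_slope=0.01),
    nn.Linear(32, 1),
)
</code></pre>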
<p>Sources:</p>
<p><a href="https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7" rel="nofollow noreferrer">Practical guide for ReLU</a></p>
<p><a href="https://himanshuxd.medium.com/activation-functions-sigmoid-relu-leaky-relu-and-softmax-basics-for-neural-networks-and-deep-8d9c70eed91e" rel="nofollow noreferrer">ReLU variants</a></p>
<p><a href="https://towardsdatascience.com/the-dying-relu-problem-clearly-explained-42d0c54e0d24#:%7E:text=The%20dying%20ReLU%20problem%20refers,only%20output%20values%20of%200.&text=As%20long%20as%20NOT%20all,the%20network%20can%20continue%20learning." rel="nofollow noreferrer">Dying ReLU problem</a></p> | 2021-09-19 12:21:06.520000+00:00 | 2021-09-19 12:21:06.520000+00:00 | null | null | 69,240,517 | <p>I am using pytorch and autograd to build my neural network architecture. It is a small 3 layered network with a sinngle input and output. Suppose I have to predict some output function based on some initial conditions and I am using a custom loss function.</p>
<p>The problem I am facing is:</p>
<ol>
<li><p>My loss converges initially but gradients vanish eventually.</p>
</li>
<li><p>I have tried sigmoid activation and tanh. tanh gives slightly better results in terms of loss convergence.</p>
</li>
<li><p>I tried using ReLU, but since I don't have many weights in my neural network, the weights become dead and it doesn't give good results.</p>
</li>
</ol>
<p>Is there any other activation function apart from sigmoid and tanh that handles the problem of vanishing gradients well enough for small sized neural networks?
Any suggestions on what else I can try?</p>
41,192,025 | <p>In <em>The Art of Computer Programming</em>, p. 183 (Section 3.5.1), Donald Knuth has the following table of lower and upper bounds on the minimum numbers of comparisons:</p>
<p><a href="https://i.stack.imgur.com/Nt2Rj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nt2Rj.png" alt="Donald Knuth, TAOCP, p. 183, Section 3.5.1"></a></p>
<p>The <code>ceil(lg n!)</code> is the "information theoretic" lower bound, whereas <code>B(n)</code> is the maximum number of comparisons in a binary insertion sort. Since the lower and upper bounds are equal for <code>n=4</code>, 5 comparisons are needed.</p>
<p>The information-theoretic bound is derived by recognizing that there are <code>n!</code> possible orderings of <code>n</code> unique items. We distinguish these cases by asking <code>S</code> yes-no questions of the form <code>is X<Y?</code>. These questions form a tree which has at most <code>2^S</code> leaves. We need <code>n!<=2^S</code>; solving for <code>S</code> gives <code>ceil(lg(n!))</code>. For <code>n=4</code> this is <code>ceil(lg 24) = 5</code>, matching the table above.</p>
<p>Incidentally, you can use <a href="https://en.wikipedia.org/wiki/Stirling's_approximation" rel="nofollow noreferrer">Stirling's approximation</a> to show that this implies that sorting requires <code>O(n log n)</code> time.</p>
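<p>To make the <code>n=4</code> case concrete, here is a sketch (in Python, translating the C-like pseudocode of the question) of a merge-insertion style sort of four elements that never uses more than 5 calls to <code>check</code>:</p>
<pre><code>def check(x, y):
    return 1 if x <= y else 0

def sort4(arr):
    a, b, c, d = arr
    if not check(a, b): a, b = b, a                      # 1: ensure a <= b
    if not check(c, d): c, d = d, c                      # 2: ensure c <= d
    if not check(a, c): (a, b), (c, d) = (c, d), (a, b)  # 3: now a <= c, so a is the minimum
    # insert b into the chain c <= d (at most 2 more comparisons)
    if check(b, c):                                      # 4
        return [a, b, c, d]
    if check(b, d):                                      # 5
        return [a, c, b, d]
    return [a, c, d, b]

print(sort4([3, 1, 4, 2]))  # [1, 2, 3, 4]
</code></pre>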
<p>The rest of the section goes on to describe a number of approaches to creating these bounds and studying this question, though work is on-going (see, for instance <a href="https://arxiv.org/abs/1108.0866" rel="nofollow noreferrer">Peczarski (2011)</a>).</p> | 2016-12-16 20:31:03.717000+00:00 | 2016-12-17 19:40:05.380000+00:00 | 2016-12-17 19:40:05.380000+00:00 | null | 41,191,541 | <p>This is an interview question. Say you have an array of four ints named A, and also this function:</p>
<pre><code>int check(int x, int y){
if (x<=y) return 1;
return 0;
}
</code></pre>
<p>Now, you want to create a function that will sort A, and you can use only the function <code>check</code>for comparisons. How many calls for <code>check</code> do you need?<br>
(It is ok to return a new array for result). </p>
<p>I found that I can do this in 5 calls. Is it possible to do it with less calls (on worst case)? </p>
<p>This is how I thought of doing it (pseudo code): </p>
<pre><code>int[4] B=new int[4];
/*
The idea: put minimum values in even cells and maximum values in odd cells using check.
Then swap (if needed) between minimum values and also between maximum values.
And finally, swap the second element (max of minimums)
and the third element (min of maximums) if needed.
*/
if (check(A[0],A[1])==1){ //A[0]<=A[1]
B[0]=A[0];
B[2]=A[1];
}
else{
B[0]=A[1];
B[2]=A[0];
}
if (check(A[2],A[3])==1){ //A[2]<=A[3]
B[1]=A[2];
B[3]=A[3];
}
else{
B[1]=A[3];
B[3]=A[2];
}
if (check(B[0],B[1])==0){ //B[0]>B[1]
swap(B[0],B[1]);
}
if (check(B[2],B[3])==0){ //B[2]>B[3]
swap(B[2],B[3]);
}
if (check(B[1],B[2])==0){ // B[1]>B[2]
swap(B[1],B[2]);
}
</code></pre> | 2016-12-16 19:49:57.603000+00:00 | 2016-12-17 19:40:05.380000+00:00 | null | arrays|algorithm|sorting | ['https://i.stack.imgur.com/Nt2Rj.png', "https://en.wikipedia.org/wiki/Stirling's_approximation", 'https://arxiv.org/abs/1108.0866'] | 3 |
46,199,650 | <p>The NVIDIA GRID K520 has 8GB of memory (<a href="http://www.nvidia.com/object/cloud-gaming-gpu-boards.html" rel="nofollow noreferrer">link</a>). I have successfully trained ResNet models on an NVIDIA GPU with 12GB of memory. As the error suggests, TensorFlow attempts to keep all of the network's weights and intermediate activations in GPU memory and fails. I believe you have a few options:</p>
<ul>
<li>Train only on the CPU, as mentioned in the comments, assuming your CPU has more than 8GB of memory. This is not recommended.</li>
<li>Train a different network with fewer parameters. Several networks have been released since Resnet, such as <a href="https://arxiv.org/abs/1602.07261" rel="nofollow noreferrer">Inception-v4, Inception-ResNet</a>, with fewer parameters and comparable accuracy. This option costs nothing to try!</li>
<li>Buy a GPU with more memory. Easiest option if you have the money.</li>
<li>Buy another GPU with the same memory and train the bottom half of the network on one, and the top half of the network on the other. The difficulty in communicating between the GPUs makes this option less desirable.</li>
</ul>
<p>I hope this helps you and others that run into similar memory issues.</p> | 2017-09-13 14:02:50.843000+00:00 | 2017-09-13 14:02:50.843000+00:00 | null | null | 39,862,907 | <p>I am running <a href="https://github.com/tensorflow/models/tree/master/research/resnet" rel="nofollow noreferrer">the resenet model</a> on <code>EC2 g2(NVIDIA GRID K520</code>) instance and seeing a <code>OOM</code> Error. I have tried various combinations of removing the code that uses a <code>GPU</code>, prefixing <code>CUDA_VISIBLE_DEVICES='0'</code> and also reducing the <code>batch_size</code> to 64. I am still failing to start the training. Can you help? </p>
<pre><code>W tensorflow/core/common_runtime/bfc_allocator.cc:270] **********************x***************************************************************************xx
W tensorflow/core/common_runtime/bfc_allocator.cc:271] Ran out of memory trying to allocate 196.00MiB. See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:936] Resource exhausted: OOM when allocating tensor with shape[64,16,224,224]
E tensorflow/core/client/tensor_c_api.cc:485] OOM when allocating tensor with shape[64,16,224,224]
[[Node: unit_1_2/sub1/conv1/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](unit_1_2/residual_only_activation/leaky_relu, unit_1_2/sub1/conv1/DW/read)]]
[[Node: train_step/update/_1561 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_10115_train_step/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Traceback (most recent call last):
File "./resnet_main.py", line 203, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "./resnet_main.py", line 197, in main
train(hps)
File "./resnet_main.py", line 82, in train
feed_dict={model.lrn_rate: lrn_rate})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 382, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 655, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 723, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 743, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.ResourceExhaustedError: OOM when allocating tensor with shape[64,16,224,224]
[[Node: unit_1_2/sub1/conv1/Conv2D = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/gpu:0"](unit_1_2/residual_only_activation/leaky_relu, unit_1_2/sub1/conv1/DW/read)]]
[[Node: train_step/update/_1561 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_10115_train_step/update", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op u'unit_1_2/sub1/conv1/Conv2D', defined at:
File "./resnet_main.py", line 203, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "./resnet_main.py", line 197, in main
train(hps)
File "./resnet_main.py", line 64, in train
model.build_graph()
File "/home/ubuntu/indu/tf-benchmark/resnet/resnet_model.py", line 59, in build_graph
self._build_model()
File "/home/ubuntu/indu/tf-benchmark/resnet/resnet_model.py", line 94, in _build_model
x = res_func(x, filters[1], filters[1], self._stride_arr(1), False)
File "/home/ubuntu/indu/tf-benchmark/resnet/resnet_model.py", line 208, in _residual
x = self._conv('conv1', x, 3, in_filter, out_filter, stride)
File "/home/ubuntu/indu/tf-benchmark/resnet/resnet_model.py", line 279, in _conv
return tf.nn.conv2d(x, kernel, strides, padding='SAME')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 394, in conv2d
data_format=data_format, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2310, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1232, in __init__
self._traceback = _extract_stack()
</code></pre> | 2016-10-04 22:45:59.840000+00:00 | 2020-01-09 07:43:38.113000+00:00 | 2020-01-09 07:43:38.113000+00:00 | tensorflow | ['http://www.nvidia.com/object/cloud-gaming-gpu-boards.html', 'https://arxiv.org/abs/1602.07261'] | 2 |
56,844,197 | <p>The question in the title is different that the question in the body :) I'll answer both: </p>
<p><em>Title question: <strong>"Does distributed training produce NN that is average of NNs trained within each distributed node?"</em></strong></p>
<p>No. In the context of model training with minibatch SGD, distributed training usually refers to data-parallel distributed training, which distributes the computation of the gradients of a mini-batch of records over N worker, and then produces an average gradient used to update central model weights, in async or sync fashion. Historically, the averaging happened in a separate process called the parameter server (historical default in MXNet and TensorFlow), but modern approaches use a more network-frugal, peer-to-peer ring-style all-reduce, democratized by <a href="https://eng.uber.com/horovod/" rel="nofollow noreferrer">Uber's Horovod extension</a>, initially developed for TensorFlow but <a href="https://github.com/horovod/horovod" rel="nofollow noreferrer">now available for Keras, PyTorch and MXNet too</a>. Note that model-parallel distributed training (having different piece of a model hosted in different devices) also exists, but data parallel training is more common in practice, possibly because simpler to implement (distributing an average is easy) and because full models often fit comfortably in memory of modern hardware. However, model parallel training is occasionally seen for very large models, such as <a href="https://arxiv.org/pdf/1609.08144.pdf" rel="nofollow noreferrer">Google's GNMT</a>.</p>
<p><em>Body question: <strong>"How do you produce an average of two or more neural network weights using any mainstream technology?"</em></strong></p>
<p>This depends on each framework API, for example:</p>
<p>In TensorFlow:
<a href="https://stackoverflow.com/questions/50373678/tensorflow-averaging-model-weights-from-restored-models">Tensorflow - Averaging model weights from restored models</a></p>
<p>In PyTorch:
<a href="https://stackoverflow.com/questions/48560227/how-to-take-the-average-of-the-weights-of-two-networks">How to take the average of the weights of two networks?</a></p>
<p>In MXNet (dummy code assuming initialized <code>gluon</code> <code>nn.Sequential()</code> models with similar architecture):</p>
<pre><code># create Parameter dict storing model parameters
p1 = net1.collect_params()
p2 = net2.collect_params()
p3 = net3.collect_params()
# overwrite net3's parameters with the element-wise average of net1 and net2
for k1, k2, k3 in zip(p1, p2, p3):
p3[k3].set_data(0.5*(p1[k1].data() + p2[k2].data()))
</code></pre> | 2019-07-02 01:24:16.157000+00:00 | 2019-07-02 01:56:37.783000+00:00 | 2019-07-02 01:56:37.783000+00:00 | null | 56,824,036 | <p>I'm currently sifting through a ton of material on distributed training for neural networks (training with backward propagation). And more I dig in to this material the more it appears to me that essentially every distributed neural neural network training algorithm is just a way to combine gradients produced by distributed nodes (typically done using average) with respect to constraints on execution environment (i.e. network topology, node performance equality, ...).</p>
<p>And all the the salt of underlying algorithms is concentrated around exploitation of assumptions on execution environment constraints with aim to reduce the overall lag and thus overall amount of time necessary to complete the training.</p>
<p>So if we're just combining gradients with distributed training using averaging of weights in some clever way then the whole process training is (more or less) equivalent to averaging of networks resulted by training within every distributed node.</p>
<p>If I'm right with things described above then I would like to try combining weights produced by distributed nodes by hand.</p>
<p><strong>So my question is:</strong>
How do you produce an average of two or more neural network weights using any mainstream technology such as tensorflow / caffe / mxnet / ... </p>
<p>Thank you in advance</p>
<p><strong>EDIT @Matias Valdenegro</strong></p>
<p>Matias, I understand what you are saying: you mean that as soon as you apply the gradient, the new gradient will change, and thus it is not possible to do the parallelization because the old gradients have no relation to the newly updated weights. So real-world algorithms evaluate gradients, average them and then apply them.</p>
<p>Now if you just expand the parentheses in this mathematical operation, you will notice that you can apply the gradients locally. Essentially there is no difference between averaging the deltas (vectors) and averaging the NN states (points). Please refer to the diagram below:</p>
<p><a href="https://i.stack.imgur.com/8iD1Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8iD1Y.png" alt="enter image description here"></a></p>
<p>Suppose that NN weights are a 2-D vector.</p>
<pre><code>Initial state = (0, 0)
Deltas 1 = (1, 1)
Deltas 2 = (1,-1)
-----------------------
Average deltas = (1, 1) * 0.5 + (1, -1) * 0.5 = (1, 0)
NN State = (0, 0) - (1, 0) = (-1, 0)
</code></pre>
<p>Now the same result can be achieved if gradients were applied locally on a node and the central node would average the weights instead of deltas:</p>
<pre><code>--------- Central node 0 ---------
Initial state = (0, 0)
----------------------------------
------------- Node 1 -------------
Deltas 1 = (1, 1)
State 1 = (0, 0) - (1, 1) = (-1, -1)
----------------------------------
------------- Node 2 -------------
Deltas 2 = (1,-1)
State 2 = (0, 0) - (1, -1) = (-1, 1)
----------------------------------
--------- Central node 0 ---------
Average state = ((-1, -1) * 0.5 + (-1, 1) * 0.5) = (-1, 0)
----------------------------------
</code></pre>
<p>So the results are the same...</p> | 2019-06-30 10:16:17.497000+00:00 | 2019-07-02 01:56:37.783000+00:00 | 2019-06-30 15:32:29.580000+00:00 | python|tensorflow|machine-learning|neural-network|mxnet | ['https://eng.uber.com/horovod/', 'https://github.com/horovod/horovod', 'https://arxiv.org/pdf/1609.08144.pdf', 'https://stackoverflow.com/questions/50373678/tensorflow-averaging-model-weights-from-restored-models', 'https://stackoverflow.com/questions/48560227/how-to-take-the-average-of-the-weights-of-two-networks'] | 5 |
54,152,233 | <blockquote>
<p>ADAM or adaptive momentum works as follows:</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/3boio.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3boio.jpg" alt="Adam algorithm"></a></p>
<p>The velocity <code>v</code> accumulates the gradient elements.</p>
<p>When you look at the Adam equations in this <a href="https://arxiv.org/pdf/1412.6980.pdf" rel="nofollow noreferrer">paper</a> you will see that <strong>the stepsize has an upper bound of roughly the learning rate. In the paper they call this characteristic of Adam its "careful choice of stepsize"</strong> (discussed in Section 2.1 of the paper).
This is exactly what you observe here as essentially equal rates during the first 5 steps: the estimates in Adam build up (accumulate) over multiple previous gradients, while the stepsize itself stays restricted to approximately the learning rate. </p>
<p>For more information on how the variable is calculated and updated in TensorFlow, see the equations <a href="https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer">here</a>.</p>
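<p>To see this numerically, here is a small NumPy re-implementation of the Adam update rule applied to the example from the question (for illustration only; TensorFlow's internal implementation differs in details):</p>
<pre><code>import numpy as np

lr, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8
x, y = np.zeros(2), np.array([5.0, 1.0])
m, v = np.zeros(2), np.zeros(2)
for t in range(1, 6):
    g = 2 * (x - y)                      # gradient of |x - y|**2
    m = beta1 * m + (1 - beta1) * g      # first moment estimate
    v = beta2 * v + (1 - beta2) * g**2   # second moment estimate
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    print(g, x)
# Both coordinates move by roughly lr per step, because m_hat / sqrt(v_hat)
# is close to +-1 no matter how large the gradient is.
</code></pre>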
<p><em>Additional remarks on Adam:</em> </p>
<p>The larger <code>α</code> is relative to the learning rate, the more previous gradients affect the current direction.</p>
<p>In sgd, the size of the step was simply the norm of the gradient multiplied by the learning rate. </p>
<p>In Adam, the size of the step depends on how large and how
aligned a sequence of gradients are. The step size is largest when many successive
gradients point in exactly the same direction. If the momentum algorithm always
observes gradient <code>g</code>, then it will eventually accelerate in the direction of <code>−g</code>.</p>
<p>This is from the Deep Learning Book by Ian Goodfellow, in more Detail you can read about Adam <a href="http://www.deeplearningbook.org/contents/optimization.html" rel="nofollow noreferrer">here</a>.</p> | 2019-01-11 18:37:15.940000+00:00 | 2019-01-11 19:52:50.717000+00:00 | 2019-01-11 19:52:50.717000+00:00 | null | 54,151,981 | <p>It's possible that I just misunderstood how Adam works, but why is this happening:</p>
<pre><code>x = tf.Variable([0.0, 0.0]) # variable
y = tf.constant([5.0, 1.0]) # target
cost = tf.abs(x-y)**2
</code></pre>
<p>As the first dimension of <code>y</code> is larger than the second, the gradient in the first dimension is larger than the second (as it should be) and each dimension of <code>x</code> approaches its target value at its own rate:</p>
<pre><code>sgd = tf.train.GradientDescentOptimizer(0.001)
train = sgd.minimize(cost)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(5):
sess.run(train)
grad,variable = sess.run(opt.compute_gradients(cost))[0]
print(grad,variable)
#[-9.98 -1.996] [0.01 0.002]
#[-9.96004 -1.992008] [0.01998 0.003996]
#[-9.94012 -1.988024] [0.02994004 0.00598801]
#[-9.920239 -1.9840479] [0.03988016 0.00797603]
#[-9.900399 -1.9800799] [0.0498004 0.00996008]
</code></pre>
<p>Why are the rates essentially equal if we use Adam, even though the gradients have quite different values?</p>
<pre><code>adam = tf.train.AdamOptimizer(0.001)
train = adam.minimize(cost)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(5):
sess.run(train)
grad,variable = sess.run(opt.compute_gradients(cost))[0]
print(grad,variable)
#[-9.998 -1.998] [0.001 0.001]
#[-9.996 -1.996] [0.00199999 0.00199997]
#[-9.994 -1.9940002] [0.00299997 0.00299989]
#[-9.992001 -1.9920005] [0.00399994 0.00399976]
#[-9.99 -1.990001] [0.0049999 0.00499955]
</code></pre> | 2019-01-11 18:16:35.413000+00:00 | 2019-01-11 19:52:50.717000+00:00 | 2019-01-11 19:24:10.933000+00:00 | python|tensorflow|machine-learning|optimization|deep-learning | ['https://i.stack.imgur.com/3boio.jpg', 'https://arxiv.org/pdf/1412.6980.pdf', 'https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer', 'http://www.deeplearningbook.org/contents/optimization.html'] | 4 |
66,800,694 | <p>Here are some ways to achieve your (1) and get it right.</p>
<ol>
<li><p>There are ways to do this by fitting to the quantization tables. Sherloq - for example - does this:</p>
<p><a href="https://github.com/GuidoBartoli/sherloq" rel="nofollow noreferrer">https://github.com/GuidoBartoli/sherloq</a></p>
<p>The relevant (python) code is at <a href="https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py" rel="nofollow noreferrer">https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py</a></p>
</li>
<li><p>There is another algorithm written up in <a href="https://arxiv.org/abs/1802.00992" rel="nofollow noreferrer">https://arxiv.org/abs/1802.00992</a> - you might consider contacting the author for any code etc.</p>
</li>
<li><p>You can also simulate <code>file_size(image_dimensions,quality_level)</code> and then invert that function/lookup table to get <code>quality_level(image_dimensions,file_size)</code>. Hey presto!</p>
</li>
<li><p>Finally, you can adopt a brute-force <a href="https://en.wikipedia.org/wiki/Error_level_analysis" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Error_level_analysis</a> approach by calculating the difference between the original image and recompressed versions, each saved at a different quality level. The quality level of the original is roughly the one for which the difference is minimized. It seems to work reasonably well (but is linear in the for-loop); a rough sketch is given right after this list.</p>
</li>
</ol>
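<p>As referenced in option 4, a rough Python/Pillow sketch of the brute-force approach (illustrative only; the comparison metric and quality range are arbitrary choices):</p>
<pre><code>from io import BytesIO
from PIL import Image, ImageChops

def estimate_jpeg_quality(path, qualities=range(60, 101)):
    original = Image.open(path).convert('RGB')
    best_q, best_diff = None, float('inf')
    for q in qualities:
        buf = BytesIO()
        original.save(buf, format='JPEG', quality=q)
        buf.seek(0)
        recompressed = Image.open(buf).convert('RGB')
        # total absolute pixel difference between original and recompressed copy
        diff = sum(ImageChops.difference(original, recompressed).convert('L').getdata())
        if diff < best_diff:
            best_q, best_diff = q, diff
    return best_q
</code></pre>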
<p>Most often the quality factor used seems to be 75 or 95 which might help you to get to the result faster. Probably no-one would save a JPEG at 100. Probably no-one would usefully save it at < 60 either.</p>
<p>I can add other links for this as they become available - please put them in the comments.</p> | 2021-03-25 13:39:35.663000+00:00 | 2021-03-25 13:45:38.837000+00:00 | 2021-03-25 13:45:38.837000+00:00 | null | 2,024,947 | <p>This is really a two part question, since I don't fully understand how these things work just yet:</p>
<p>My situation: I'm writing a web app which lets the user upload an image. My app then resizes to something displayable (eg: 640x480-ish) and saves the file for use later.</p>
<p>My questions:</p>
<ol>
<li>Given an arbitrary JPEG file, is it possible to tell what the quality level is, so that I can use that same quality when saving the resized image?</li>
<li>Does this even matter?? Should I be saving all the images at a decent level (eg: 75-80), regardless of the original quality?</li>
</ol>
<p>I'm not so sure about this because, as I figure it (let's take an extreme example), if someone had a 5 megapixel image saved at quality 0, it would be blocky as anything. Reducing the image size to 640x480, the blockiness would be smoothed out and <strike>barely</strike> less noticeable... until I saved it with quality 0 again...</p>
<p>On the other end of the spectrum, if there was an image which was 800x600 with q=0, resizing to 640x480 isn't going to change the fact that it looks like utter crap, so saving with q=80 would be redundant.</p>
<p>Am I even close?</p>
<p><em>I'm using GD2 library on PHP if that is of any use</em></p> | 2010-01-08 01:50:48.273000+00:00 | 2022-02-13 09:50:27.950000+00:00 | 2013-11-14 21:33:14.813000+00:00 | image-processing|jpeg|image-compression | ['https://github.com/GuidoBartoli/sherloq', 'https://github.com/GuidoBartoli/sherloq/blob/master/gui/quality.py', 'https://arxiv.org/abs/1802.00992', 'https://en.wikipedia.org/wiki/Error_level_analysis'] | 4 |
54,426,664 | <p>I guess what you're supposed to do is <a href="https://en.wikipedia.org/wiki/Image_segmentation" rel="nofollow noreferrer">image segmentation</a> and in the shape of the labels you got, the last dimension of 200 corresponds to 200 possible categories of pixels (that sounds like a lot to me, but without more context I cannot judge). The problem of image segmentation is <em>way</em> too broad to explain in an SO answer, but I suggest you check resources such as <a href="https://tuatini.me/practical-image-segmentation-with-unet/" rel="nofollow noreferrer">this tutorial</a> and check out the <a href="https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf" rel="nofollow noreferrer">influential</a> <a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">papers</a> in this field.</p> | 2019-01-29 17:36:32.717000+00:00 | 2019-01-29 17:36:32.717000+00:00 | null | null | 54,426,268 | <p>I have already learned about some classification with CNNs, for example on MNIST. But recently I received a dataset which consists of a set of vectors. A normal image dataset (MNIST) is shaped n x c x w x h; the one I received is (w*h) x 1 x c. The goal is to train a network to classify these pixels (as I understand it, this is classification of pixels). The labels are given as a ground-truth picture.</p>
<p>I am a little confused about this task. As I understand it, for image processing we use a CNN whose convolutions have various receptive fields, so that features representing the image can be extracted. But in this case the image has already been expanded into a set of pixels. Why is a convolutional neural network still suitable?</p>
<p>I am still not sure about the task, but I started to try. I used 1D convolutions instead of 2D in the network. After four Conv1d layers, the output is connected to a softmax layer and then fed to a cross-entropy loss function. It seems I have some problems with the output dimensions, so the network is not able to train.</p>
<p>I use PyTorch for the implementation. Below is the network I am trying to build. The dimensions do not match those needed by the cross-entropy loss function. 122500 was set as the number of samples, so I think the convolution is performed over the 1 x 200 dimensions.</p>
<p>First, I want to know: is it right to implement it like this with Conv1d when I want to classify the pixels?</p>
<p>If this approach is right, how can I feed the features to the loss function?</p>
<p>If it is wrong, could I have some similar examples for this kind of work? I am new to Python, so if there are some silly mistakes, please point them out.</p>
<p>Thanks all.</p>
<pre><code>class network(nn.Module):
"""
Building network
"""
def __init__(self):
super(network, self).__init__()
self.conv1 = nn.Conv1d(in_channels = 1,out_channels = 32,stride = 1,kernel_size = 3)
self.conv2 = nn.Conv1d(in_channels = 32,out_channels = 64,stride = 1,kernel_size = 3)
self.conv3 = nn.Conv1d(in_channels = 64,out_channels = 128,stride = 1,kernel_size = 3)
self.conv4 = nn.Conv1d(in_channels = 128,out_channels = 256,stride = 1,kernel_size = 3)
self.fc = nn.Linear(13, 2)
def forward(self,s):
s = self.conv1(s)
s = F.relu(F.max_pool1d(s, 2))
s = self.conv2(s)
s = F.relu(F.max_pool1d(s, 2))
s = self.conv3(s)
s = F.relu(F.max_pool1d(s, 2))
s = self.conv4(s)
s = F.relu(F.max_pool1d(s, 2))
s = self.fc(s)
s = F.softmax(s,1)
output = model(input)
loss = loss_fn(output, labels)
</code></pre> | 2019-01-29 17:11:12.210000+00:00 | 2021-01-21 12:25:32.367000+00:00 | 2019-02-15 08:35:16.040000+00:00 | conv-neural-network|pixel|pytorch | ['https://en.wikipedia.org/wiki/Image_segmentation', 'https://tuatini.me/practical-image-segmentation-with-unet/', 'https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf', 'https://arxiv.org/pdf/1505.04597.pdf'] | 4 |
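<p>One detail that commonly causes the dimension mismatch described above: <code>nn.CrossEntropyLoss</code> expects raw logits of shape <code>(N, num_classes)</code> together with integer class labels of shape <code>(N,)</code>, and it applies log-softmax internally, so an explicit softmax layer is not needed. A standalone shape check (the sizes below are made up for illustration):</p>
<pre><code>import torch
import torch.nn as nn

num_pixels, num_classes = 8, 200                        # illustrative sizes only
logits = torch.randn(num_pixels, num_classes)           # raw model outputs, no softmax
labels = torch.randint(0, num_classes, (num_pixels,))   # one class index per pixel

loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(logits, labels)
print(loss.item())                                      # scalar loss value
</code></pre>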
32,596,896 | <p>The difference between PC and PC Stable is a change to the Skeleton phase of the algorithm, implemented by the <code>skeleton()</code> function in <code>pcalg</code>. The <code>pcSelect()</code> function is only trying to learn the adjacency set of one particular variable, not the whole structure of the graph, so it does not call <code>skeleton()</code>. It looks like you're trying to copy the parts of <code>skeleton</code> that have the Stable modification directly into <code>pcSelect</code>. However, in <code>skeleton</code>, G is a matrix, and in <code>pcSelect</code>, G is a vector. That's giving you an error when you try to apply the <code>split()</code> function, which works on the matrix but not on the vector.</p>
<p>So what do you need to achieve the Stable modification in <code>pcSelect</code>? From the R documentation, it looks like <code>pcSelect</code> is an implementation of the PC-simple algorithm, published in Buehlmann, P., Kalisch, M. and Maathuis, M.H. (2010). 'Variable selection for high-dimensional linear models: partially faithful distributions and the PC-simple algorithm.' <em>Biometrika 97</em>, 261–278. The paper is on Arxiv here: <a href="http://arxiv.org/abs/0906.3204" rel="nofollow">http://arxiv.org/abs/0906.3204</a> . </p>
<p>In line 278 of the Arxiv version, the authors describe how edges are to be removed, by creating a new "active set" of regressors at each stage. As written, the conditioning sets in the correlation tests range over <em>all</em> variables in the active set, before the new active set is created. In other words, variables are not removed from the active set after each test; a group of them are removed at the end of each stage, so there is no dependence on the order of tests within each stage. This is equivalent to the Stable modification to PC. </p>
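<p>In sketch form, the order-independent ("stable") pattern described in that paragraph looks like this (a language-agnostic illustration written in Python rather than R, not pcalg code; the conditional-independence test is left as a callback, and tests are run from one endpoint only for brevity):</p>
<pre><code>import itertools

def skeleton_stable(nodes, edges, indep_test, max_level):
    """Adjacency sets are frozen per stage; deletions are applied only at the end of the stage."""
    edges = {frozenset(e) for e in edges}
    adj = {v: {u for e in edges for u in e if v in e and u != v} for v in nodes}
    for level in range(max_level + 1):
        frozen_adj = {v: set(adj[v]) for v in nodes}   # snapshot used for every test this stage
        to_remove = set()
        for e in edges:
            x, y = tuple(e)
            for S in itertools.combinations(frozen_adj[x] - {y}, level):
                if indep_test(x, y, set(S)):
                    to_remove.add(e)
                    break
        edges -= to_remove                             # the graph is updated only now
        for e in to_remove:
            x, y = tuple(e)
            adj[x].discard(y)
            adj[y].discard(x)
    return edges
</code></pre>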
<p>If the <code>pcSelect</code> implementation follows its published description, then you needn't worry: it already has the Stable modification built in. The <a href="https://github.com/rforge/pcalg/blob/master/pkg/R/pcalg.R" rel="nofollow">source code of pcSelect</a> is a bit opaque, but I think that's what it's doing. So you don't need to change anything in <code>pcSelect</code>! </p> | 2015-09-15 22:45:06.293000+00:00 | 2015-09-15 22:45:06.293000+00:00 | null | null | 30,634,064 | <p>In R (the pcalg package): how can I write the stable method into pcSelect, similar to pc_stable? I just want to write the stable method into pcSelect as follows:</p>
<pre><code>while (!done && any(G)) {
n.edgetests[ord+1] <- 0
done <- TRUE
ind <- which(G)
remainingEdgeTests <- length(ind)
if(verbose>=1)
cat("Order=",ord,"; remaining edges:",remainingEdgeTests,"\n", sep='')
if(method == "stable") {
#View(G)
## Order-independent version: Compute the adjacency sets for any vertex
## Then don't update when edges are deleted
G.l <- split(G, gl(p,p))
#View(G.l)
}
</code></pre>
<p>However, <code>G.l<-split(G,gl(p,p))</code> does not succeed here. Could you please help me? Thanks.</p> | 2015-06-04 02:41:16.187000+00:00 | 2015-09-15 22:45:06.293000+00:00 | 2015-06-04 03:38:08.767000+00:00 | r | ['http://arxiv.org/abs/0906.3204', 'https://github.com/rforge/pcalg/blob/master/pkg/R/pcalg.R'] | 2 |
55,379,217 | <p>Are you really sure that a CNN is what you want? It might not be the best choice for sequential data. If you want to learn more about alternative ways to deal with sequential data (like time series), I recommend reading the paper "<a href="https://arxiv.org/abs/1602.01711" rel="nofollow noreferrer">The Great Time Series Classification Bake Off</a>".</p>
<p>I especially recommend looking into <a href="https://en.wikipedia.org/wiki/Dynamic_time_warping" rel="nofollow noreferrer">Dynamic time warping</a> first. It might not be a bleeding edge deep learning algorithm but turns out to be very hard to beat when it comes to classifying sequential data.</p> | 2019-03-27 14:06:20.757000+00:00 | 2019-03-27 14:06:20.757000+00:00 | null | null | 55,378,909 | <p>Is there any way to transform sequential data to 2-dimensional data in order to use it for a common CNN?</p>
<p>My dataset looks like: 14,40,84,120,38,29,395,58,153,...</p>
<p>But I need a 2-dimensional representation for that. Is there any established algorithm for that purpose?</p> | 2019-03-27 13:52:21.153000+00:00 | 2019-03-27 18:46:13.123000+00:00 | 2019-03-27 18:46:13.123000+00:00 | tensorflow|machine-learning|neural-network|conv-neural-network | ['https://arxiv.org/abs/1602.01711', 'https://en.wikipedia.org/wiki/Dynamic_time_warping'] | 2 |
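<p>For reference, the 1-NN-with-DTW approach recommended in the answer above boils down to something like this (a plain NumPy sketch of the textbook DTW recurrence and nearest-neighbour rule, not code from the linked paper; the example series reuse the numbers from the question):</p>
<pre><code>import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming DTW distance between two 1-D series, O(len(a)*len(b))."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_1nn_dtw(train_series, train_labels, query):
    """Assign the label of the training series closest to the query under DTW."""
    distances = [dtw_distance(query, s) for s in train_series]
    return train_labels[int(np.argmin(distances))]

print(classify_1nn_dtw([[14, 40, 84, 120], [1, 2, 3, 2]], ["A", "B"], [38, 29, 395, 58]))
</code></pre>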