Dataset schema:
text: stringlengths, 1 to 41.2k
label: stringclasses, 55 values
dataType: stringclasses, 2 values
communityName: stringclasses, 55 values
datetime: stringdate, 2011-10-28 00:00:00 to 2025-04-25 00:00:00
username_encoded: stringlengths, 136 to 160
url_encoded: stringlengths, 220 to 332
The amazing story of Rugby League legend Roy Francis. Available on BBC Sounds from 7pm onwards tonight
r/rugbyleague
post
r/rugbyleague
2025-03-03
Z0FBQUFBQm9DbVhReE1lWGFPaU9XT2RQRGdlN1kzVGdrbU02WHZrdm5wNVBPcTZhVU8zcW1ka05fcnVWMklPcGFNajFHM1FEekVyNGNiV2d2YUtKaFowZTk0bWo4WXRqWUE9PQ==
Z0FBQUFBQm9DbVhRanRaRVVqT0I1WDl2X3B1YWdNdVQtdXcwVl9NSEN5S2FYNU13NjlPbVNCOXZ0RHBIYW04b3o0emRPZGFtNjAzc3ZVTVlPODZvZFVwWXVvRDdTdlVGRjJXRDBHdlFKc29FZ3cwdXBId0RHU3VzbEcwWUlIc0JpQ0tRYXQySEVrdXZMTW9wWFhOUmV1RWluOXhmWjIzcFdXUFJJS1E2RkFQaEc4cUdLOTVnUngxX0JaM0dGWG1nVjBCcUR0dUNkWXN4aEpiLTlaZUdYTnRRWDVfSVdGeVlDQT09
No response!!!
r/neuralnetworks
comment
r/neuralnetworks
2025-03-03
Z0FBQUFBQm9DbVhRQVV3YkpKU3JOZmhLTmZHTEJKN0FIVFd2Nmt6YXVpYUhsM3lodm1BbWZKVEI0dU1hOTgteVdYX3ZHSTJoTFFYSmVxaWRyRHlMQWNaQm9LYVNRWTZ2VlE9PQ==
Z0FBQUFBQm9DbVhROG56UVViQTBhbjBoRjdNMXYtNXFYS1BtT29tQ3RxcmtXbW91dWdNY1RFYnJTUTZqS2l1bXdRQVFfbWdGNDM5TnJ3TUVCM2FrRjRXZ2lrZHNVdWhPZUlYU2NqRS0yTmotd1czY09KeXhBS0NqaUVtZXJJenQwS2hyMWRaZEVheUlqZXNUUkZUV1d4YTdhN08zbk5oLXNyeGhHRmJsM09nVlRzZVJ5UGZIdXpHbW1Hd2YzYXVWcWpnWFhydTJGRC1mSWl5dk5rTG9ZWk1BVU1hRnEzWGlnZjNpb1JoZk9vbHdpUnRlbzVjYlFUST0=
I'm curious about what frameworks others use to decide whether to adopt a new technology and, if so, which one. Any preferred methods you recommend? Do you have a formal process, or do you do it based on vibes?
r/legaltech
post
r/legaltech
2025-03-03
Z0FBQUFBQm9DbVhRZG1CSThQTjA1Z0Etb3YzVDViOWdMck5iUjlpR2h0Wk5Tc3EzWEdmVXgyZDRMdmwyNlYyV0IwajhuSXI4WFdDSEFnc2RleVA2aDc4LWFxeFp0S3ZJZ2c9PQ==
Z0FBQUFBQm9DbVhRaVRUQmdwUEx1QVQ5OEZ6dHQ1Z0dMUm1PRXVtNEpEZE9aVXAyVWhZeDlTbER2Wmw3THZTY3AtM2FBT0ZoWHdiOG0tY2w0dVpwU0ktZXNDdUJZZkRIcHBrWmhSSlRkTGRDaGVCTExKbWRoQmtlWE9QbVdCSnN6eXlrMkZCc094NExnM3J6Y2J3MnQ1MGI3Q3Z3Z0JjU2tmOVNQWmVhQ1QwZGNaS3NtVWtmdFZzPQ==
Hi everyone, sharing a small project that I have been working on. Here's a description of the video: > In this demo, 200 entities called "blobs" are placed in a 128x128 environment. Each one is wired with a randomly generated neural network. At the end of each generation, all blobs on the left half are removed (highlighted in red), and the remaining blobs are used to repopulate the next generation. This demonstration shows that as the generations progress, blobs gain a tendency to move towards the right, since that is the best survival strategy in each generation. A sample blob is highlighted in blue in each generation, and its neural network data is displayed on the user interface. TL;DR: many small randomly generated neural networks are placed in an environment, and the left half is deleted after each generation. Watch how the population evolves on its own to move towards the right. I would really appreciate any feedback, code review, or suggestions! You can find the source at: https://github.com/kostareg/evolution-rs
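In case it helps anyone reason about the setup without opening the repo, here is a rough Python sketch of the generation loop described above. It is not the project's actual code (the linked source is in Rust), and the "brain" is reduced to a single 2x2 weight matrix, but it shows the same remove-the-left-half / repopulate-with-mutations cycle:

```python
import numpy as np

WORLD = 128        # 128x128 environment
N_BLOBS = 200      # population size
STEPS = 50         # movement steps per generation
GENERATIONS = 30

def random_brain():
    # Tiny stand-in "neural network": a 2x2 weight matrix mapping
    # normalized (x, y) position to a (dx, dy) step.
    return np.random.randn(2, 2)

def step(brain, pos):
    move = np.tanh(brain @ (pos / WORLD))          # bounded move in [-1, 1]
    return np.clip(pos + move, 0.0, WORLD - 1.0)

def mutate(brain, scale=0.1):
    return brain + np.random.randn(*brain.shape) * scale

population = [random_brain() for _ in range(N_BLOBS)]

for gen in range(GENERATIONS):
    # Run each blob from a random spawn point for STEPS moves.
    finals = []
    for brain in population:
        pos = np.random.rand(2) * WORLD
        for _ in range(STEPS):
            pos = step(brain, pos)
        finals.append(pos)

    # Selection: blobs that end on the left half (x < 64) are removed.
    survivors = [b for b, p in zip(population, finals) if p[0] >= WORLD / 2]
    print(f"gen {gen}: {len(survivors)}/{N_BLOBS} survived")

    # Repopulate the next generation from survivors with small mutations.
    if not survivors:
        population = [random_brain() for _ in range(N_BLOBS)]
    else:
        population = [mutate(survivors[np.random.randint(len(survivors))])
                      for _ in range(N_BLOBS)]
```

With only this selection pressure, the surviving brains tend to be the ones whose weights push x upward regardless of starting position, which is the behaviour the video demonstrates.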
r/neuralnetworks
comment
r/neuralnetworks
2025-03-03
Z0FBQUFBQm9DbVhRZndaVkVEaElPWWt4bHpTck5QUnQwQ2lTX3F5OW1sd29tNEpJMlZodjItWW4xVDZpRU5BbXNVTmtUci1tekxPSjU3RFFYMmtJcVBiYlJ4N0RvbFFhYWc9PQ==
Z0FBQUFBQm9DbVhROFdvME9TUHhxNnloOURPNFhkejlCSzE2SzNNaTAwQ1NTSFFPSWM5UTR3N0R1V1M4dDNfa2tOQ1N0ay1xVDNxVUtLZHhMSENMZEM1OHhsc0t4U2FKS21rRURSSTF5OS1FRGQ3ZWdvOTZYMnJ0OWQyOUg4dUtTNVVfblVNR3VJNkYyZHNsUkZBbVNaMVB4V0YxR2kwQkZURzNPdnVicG9ReDV5RFliYWpFa1ZtSEFDMlNmemRicHRwSzZyb3VFMGJrRzRSN0NhTnU4cElJd0lWZDcwa29vUT09
Wanted to share my advice on breaking into legal tech, as I found this subreddit very helpful during my job search. tl;dr: I graduated from law school a couple of years ago, did litigation for a bit, did a clerkship, then took a few months off to do a job search because I was interested in pivoting to tech. I do not have a technical background, although I was an RA for a law school professor who teaches tech law/policy. Caveat that, although I primarily applied for roles at legal tech companies and received a few offers, I actually ended up accepting an operations role at an AI startup that does not intersect with law (happy to share the company name if you DM me since they're still hiring, but not trying to make this post about them!). My advice:

- I did around 75-80 calls during my job search, most of which I found immensely helpful to (1) get a better sense of the legal tech, and particularly the legal AI, landscape and (2) find job leads. On every call, I'd ask the person I was speaking with who they recommended talking to next, which was a great way to get outside my own network bubble. I also did a decent amount of cold emailing/LinkedIn messaging (and even some cold Reddit messaging), which had a surprisingly high response rate.
- If you graduated from law school relatively recently, I highly recommend scheduling coffee chats with a few professors to ask for their advice. Even though many of my professors didn't have a background in legal tech, they knew which of their former students did and were willing to connect me with them. Your school may also have a specific legal tech institution that's worth reaching out to (e.g., Berkman Klein Center at HLS, CodeX at Stanford, etc.).
- Start following tech / legal tech folks on Twitter and LinkedIn and learning about whatever tech space you want to break into (e.g., if you're interested in AI, I'd recommend Andrej Karpathy's 'Intro to Large Language Models' on YouTube and the famous 2017 'Attention Is All You Need' paper if you haven't read it yet).
- Make your resume less law and more business. Obviously, you've practiced law, so that's going to be on your resume. But just changing the wording around how you describe the work that you've done can be helpful. I went through my resume and both (1) tried to make legal concepts more understandable for non-lawyers and (2) put in more business-type descriptors of my legal work such as 'project management' and 'client engagement.'
- Do your research. I only had a vague idea of what a product manager or customer success operator did before starting my search, so I had a lot of catching up to do. I also found it useful to learn more about the different startup fundraising stages and the risk/reward associated with each.
- Be humble. If you're coming from Big Law, you're likely going to take a pay cut, and it also may feel like there's a loss of prestige, at least to start. In order to join an incredible company, I ended up taking a role that many recent college grads are in and have zero regrets about it - but it was hard to get over the initial hurdle of feeling like I had to go backward in order to go forward, and it also made me realize that more of my identity/ego was tied up in being a lawyer than I had realized.
- What you lose in salary, you may make up for in equity, particularly if you're applying to startups. This should go without saying for lawyers, but you should ensure you know how to effectively negotiate both the equity amount and the terms around the equity (for example, the time window you have to exercise vested equity after leaving the company). I also found it helpful to run my offers and the companies I was considering joining past some VC friends.

Hope at least some folks find these tips useful - and sending good vibes to everyone undertaking a job search right now!!
r/legaltech
post
r/legaltech
2025-03-04
Z0FBQUFBQm9DbVhRWm5Pc1dseE9tZWYzV1F2cUR5eHBnb1p0Q3NvQm5KR2F3eDNzNktzd3d3bHNrVlhucEdFaVg5SHFxVzduTUJMRnk0YkRvSDhGX2ZXRjU0Y2NLWVpWQ0E9PQ==
Z0FBQUFBQm9DbVhRbDFXUHp0Mzg0RFJwNlp5QXdaY1NCdEw0LVdDRXYxOG13RmZRb1NRMHljTXY5NWpFWlFSbGgwUExNT3JXYUJGNmJ3clIwUGQ2SjQxTFRYcWNxWEJRd1VyOTl1WXRSaUpOWV82STdBNkZFVm1BVk05LTFteFhKWGJlS2lyRlRRRGVOcVdrVTBuUDBWUlAwbVhKX3pVaUIydVBqNDlTMkRRSjlKMXBOXzlTSGxRLUdaUnVaNEtqWDlxYjRYb3ZiaC1n
No
r/filecoin
comment
r/filecoin
2025-03-04
Z0FBQUFBQm9DbVhRWnV5Zm8xNTh5cnNtbXBMdWVORklKbDV0cnYyYVI0Y1JpUWVhak9Gd2ZyUkN2T1FyeUp2SFVYSnNwRnBZbl9nb1BadUtLZzk1bU5BUGo2MWJqNW1zOEZKR1hIcFVwbkZPV2hNcXFzemtqSXc9
Z0FBQUFBQm9DbVhRVTVyN0lfcTRsUk93N0hxTzJOZjRVNzhBcmxtZGlUQTVwRFJ0ZlBpeVZ0dDVnMzdpSGpybDQ3X0ZSVjlPRFRCNWZSTEQxVTlNWDRtUjlkT3lVZHA3LVVnZzd4QVNadG9PcVpWUnpzOWJCOE9tYWFaS2dWcjhHemJJZGU2MFBXUWV4b2N5NGF5WWVaMmI4aWtjRlo0LVFKcW5Mb25VQnJTU2l4TlhFTm5NQWEwPQ==
I'm very new to machine learning development, neural networks, recurrent neural networks, and don't have much experience with Python. Despite this, I am attempting to create a recurrent neural network that can train to figure out the next number in a consecutive number sequence. I have put together a basic draft of the code through some learning, tutorials, and various resources, but I keep running into an issue where the network will train and learn, but it will only get closer and closer to the first sample of data, not whatever the current sample of data is, leading to a very random spread of loss on the plot. **TL;DR RNN having issue of training toward only first dataset sample despite receiving new inputs** Here is the code (please help me with stupid Python errors as well):

    import numpy as np
    import matplotlib.pyplot as plt
    import pandas as pd

    # Gather User Input Variables
    print("Input amount of epochs: ")
    epochs_AMNT = int(input())
    print("Input amount of layers: ")
    layers_AMNT = int(input())
    print("Input length of datasets: ")
    datasets_length = int(input())
    print("Input range of datasets: ")
    datasets_range = int(input())
    print("Input learning rate: ")
    rate_learn = float(input())

    # Gather Training Data
    def generate_sequence_data(sequence_length=10, num_sequences=1, dataset_range=50):
        X = []
        Y = []
        for _ in range(num_sequences):
            start = np.random.randint(0, dataset_range)  # Random starting point for each sequence
            sequence = np.arange(start, start + sequence_length)
            X.append(sequence[:-1])  # All but last number as input
            Y.append(sequence[-1])  # Last number as the target
        # Convert lists to numpy arrays
        X = np.array(X)
        Y = np.array(Y)
        return X, Y

    print("Press enter to begin training...")
    input()

    # Necessary Functions for Training Loop
    def initialize_parameters(hidden_size, input_size, output_size):
        W_x = np.random.randn(hidden_size, input_size) * 0.01
        W_h = np.random.randn(hidden_size, hidden_size) * 0.01
        W_y = np.random.randn(output_size, hidden_size) * 0.01
        b_h = np.zeros((hidden_size,))
        b_y = np.zeros((output_size,))
        return W_x, W_h, W_y, b_h, b_y

    def forward_propogation(X, ih_weight, hh_weight, ho_weight, bias_hidden, bias_output, h0):
        T, input_size = X.shape
        hidden_size, _ = ih_weight.shape
        output_size, _ = ho_weight.shape
        hidden_states = np.zeros((T, hidden_size))
        outputs = np.zeros((T, output_size))
        curr_hs = h0  # Initialize hidden state
        for t in range(T):
            curr_hs = np.tanh(np.dot(ih_weight, X[t]) + np.dot(hh_weight, curr_hs.reshape(3,)) + bias_hidden)  # Hidden state update
            curr_output = np.dot(ho_weight, curr_hs) + bias_output  # Output calculation
            hidden_states[t] = curr_hs
            outputs[t] = curr_output
        return hidden_states, outputs

    def evaluate_loss(output_predict, output_true, delta=1.0):
        # Huber Loss Function
        error = output_true - output_predict
        small_error : bool = np.abs(error) <= delta
        squared_loss = 0.5 * error**2
        linear_loss = delta * (np.abs(error) - 0.5 * delta)
        return np.sum(np.where(small_error, squared_loss, linear_loss))

    def backward_propogation(X, Y, Y_pred, H, ih_weight, hh_weight, ho_weight, bias_hidden, bias_output, learning_rate):
        T, input_size = X.shape
        hidden_size, _ = ih_weight.shape
        output_size, _ = ho_weight.shape
        dW_x = np.zeros_like(ih_weight)
        dW_h = np.zeros_like(hh_weight)
        dW_y = np.zeros_like(ho_weight)
        db_h = np.zeros_like(bias_hidden)
        db_y = np.zeros_like(bias_output)
        dH_next = np.zeros((hidden_size,))  # Initialize next hidden state gradient
        for t in reversed(range(T)):
            dY = Y_pred[t] - Y[t]  # Output error
            dW_y += np.outer(dY, H[t])  # Gradient for W_y
            db_y += dY  # Gradient for b_y
            dH = np.dot(ho_weight.T, dY) + dH_next  # Backprop into hidden state
            dH_raw = (1 - H[t] ** 2) * dH  # tanh derivative
            dW_x += np.outer(dH_raw, X[t])  # Gradient for W_x
            dW_h += np.outer(dH_raw, H[t - 1] if t > 0 else np.zeros_like(H[t]))
            db_h += dH_raw
            dH_next = np.dot(hh_weight.T, dH_raw)  # Propagate error backwards
        # Gradient descent step
        ih_weight -= learning_rate * dW_x
        hh_weight -= learning_rate * dW_h
        ho_weight -= learning_rate * dW_y
        bias_hidden -= learning_rate * db_h
        bias_output -= learning_rate * db_y
        return ih_weight, hh_weight, ho_weight, bias_hidden, bias_output

    def train(hidden_size, learning_rate, epochs):
        data_inputs, data_tests = generate_sequence_data(datasets_length, epochs, datasets_range)
        data_inputs = data_inputs.reshape((data_inputs.shape[0], 1, data_inputs.shape[1]))  # Reshape for LSTM input (samples, timesteps, features)
        input_size = data_inputs.shape[1] * data_inputs.shape[2]
        output_size = data_tests.shape[0]
        ih_weight, hh_weight, ho_weight, bias_hidden, bias_output = initialize_parameters(hidden_size, input_size, output_size)
        hidden_states = np.zeros((hidden_size,))
        losses = []
        for epoch in range(epochs):
            loss_epoch = 0
            hidden_states, output_prediction = forward_propogation(data_inputs[epoch], ih_weight, hh_weight, ho_weight, bias_hidden, bias_output, hidden_states)
            loss_epoch += evaluate_loss(output_prediction, data_tests[epoch])
            ih_weight, hh_weight, ho_weight, bias_hidden, bias_output = backward_propogation(data_inputs[epoch], data_tests, output_prediction, hidden_states, ih_weight, hh_weight, ho_weight, bias_hidden, bias_output, learning_rate)
            losses.append(loss_epoch / data_inputs.shape[0])
            if (epoch % 1000 == 0):
                print("Epoch #" + str(epoch))
                print("Dataset: " + str(data_inputs[epoch]))
                print("Pred: " + str(output_prediction[0][-1]))
                print("True: " + str(data_tests[epoch]))
                print("Loss: " + str(losses[-1]))
                print("------------")
        return losses, ih_weight, hh_weight, ho_weight, bias_hidden, bias_output

    print("Started Training.")
    losses, ih_weight, hh_weight, ho_weight, bias_hidden, bias_output = train(layers_AMNT, rate_learn, epochs_AMNT)
    print("Training Finished.")

    # Plot loss curve
    plt.plot(losses)
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.title("Training Loss Over Time")
    plt.show()
r/neuralnetworks
post
r/neuralnetworks
2025-03-04
Z0FBQUFBQm9DbVhRU1d2ekhBcUVCT0FmaERGUE9iVlM1YXM3SGFCUGEtNW4zLXlyOGFpQ1pFOXVOeHhWVFdIbTM4NmFYVjA0MG5UREhBaFBmUi00X3Y5eGdDbWlyclhGRVE9PQ==
Z0FBQUFBQm9DbVhRczNHMTdxMFdreXp6U0s5UVAtWE5fZ0RISnZfS1RPc0VqY19VM0lyTmRUYVhhODZQRjdJLW9ra2UwcXJyaXB6bURFallPaHZrTm9TbGZ3MjZWTHp1NjBJaXg4R2Z6aG1WMk1UbS1PV0N2M0VmZzNCcjRaMmxJUXNEaGFsd3hDUlo2Qmd4VXM3enRqTXJMWFd5Y2hadTVUQlZlT3Jjc0k4X2FQcGgwVGt0UUhCRkJXWVJBcURrYXlSa3FRNFlKTHJW
When I read articles about Gemini 2.0 Flash doing much better than GPT-4o at PDF OCR, I was very surprised, as 4o is a much larger model. At first I just swapped 4o out for Gemini in our code, but I was getting really bad results. So I got curious why everyone else was saying it's great. After digging deeper and spending some time, I realized it likely all comes down to the image resolution and how ChatGPT handles image inputs. I dig into the results in this Medium article: [https://medium.com/@abasiri/why-openai-models-struggle-with-pdfs-and-why-gemini-fairs-much-better-ad7b75e2336d](https://medium.com/@abasiri/why-openai-models-struggle-with-pdfs-and-why-gemini-fairs-much-better-ad7b75e2336d)
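If the resolution explanation holds, the practical lever on the sending side is controlling how many pixels the model actually receives. A minimal sketch, assuming the pdf2image package (with poppler installed); the file name is a placeholder:

```python
from pdf2image import convert_from_path
import base64, io

def pdf_to_png_bytes(path, dpi=300):
    """Render each page at `dpi` and return a list of PNG byte strings."""
    pages = convert_from_path(path, dpi=dpi)      # PIL images, one per page
    out = []
    for page in pages:
        buf = io.BytesIO()
        page.save(buf, format="PNG")
        out.append(buf.getvalue())
    return out

# At 72 dpi a US-letter page is roughly 612x792 px; at 300 dpi it is roughly 2550x3300 px,
# which makes small table text legible to a vision model.
pages = pdf_to_png_bytes("contract.pdf", dpi=300)
b64_first_page = base64.b64encode(pages[0]).decode()   # ready to attach to a vision-model request
```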
r/legaltech
post
r/legaltech
2025-03-04
Z0FBQUFBQm9DbVhRTkVJMDdoZUFidzFzRmxUcVdLSFpBc19FWVhxUTFEMkFjVnF4N3VCV0ZHdWI0NXdJamZVcUoyaE12QjAxTThHX19kZ25LR2F1TUt0NVB4aVFVaW0wQ3c9PQ==
Z0FBQUFBQm9DbVhRUW1tRURDX0VVU3JWb3NEdGEtVnlFQUpJd2FQLWVNQWZNdnF2a0FSSElwOUdvckRGamo5VlBPSzNzVE15WHdnMVdQaU5SMUVGU2VmSS1yVnktendiVVpOMDRsTHl2Y2h1Qm1lSGJHRDJHYk85THh5UGs4X2FYMVp6WlhZWTIwWnM2TTA3ZUtQSnRXd3piclM0N3dVN1BFRDVHNXZiS1FWU3RCbFMyemktZzdFX3d1bTVDMUU2WW00WFRDNzAtZWQx
I am new to this subject, so please be aware of that. My question is: does the brain have a universal representation of the world? For example, how does converting visual input from the rods into a neural code work, and how does the brain store relationships like motion blur? I have some idea but can't fully grasp it; if anyone knows about this, please share what you know. Also, does anyone have an idea for some kind of universal encoder or decoder that can work with any data type and convert it into some form of universal representation? I have found that vectors, embeddings, and hyperdimensional representations are great at fixed, constant encodings, but the brain doesn't work like that. I need this part for my AI system.
r/airesearch
post
r/airesearch
2025-03-04
Z0FBQUFBQm9DbVhRa19UdjRNSEprTkRVYlN2M21JRlJ4cVhEUl9CYS11N0lYNWtwNko1Z1FhZjRJT3pEeDl5cHRTeWhlRmlFaC1TTHR2bXVUc1p5MThOSkZ0RmNQdEhCT2NNWk12SmE1d1BKUWZOWlhCQlV6Xzg9
Z0FBQUFBQm9DbVhRbDJUX2piSEc1eUwwUnBaSlM5RDE2RDFha2UtN21OdnAxdTZTaENRN0szR3FrM1M3MUJNcGlKcEVTUEFiTGZCa0lvc04tTTIwaTNQLWVSMWhieDhLUm5Za0FiQjVOVVlLU3pmUVFtdkJuMklrc2ljOU5jN0xQdkJZRi1yQ3d5Qmp5blhjN0Z4cjhfRjE3ZS11RnRVQk9mdWNDU2U1TWJlMVJGTnREbF8xNXFIdFBtY3A5Si11YnkxOHk3YUJqVWhndTAteU92SWN4OGtzVjlSZjVjWTd5UT09
I have a few thousand redacted documents, primarily PDFs of emails, PowerPoint presentations, and other originally electronic formats. Since these were redacted digitally, I assume OCR processing shouldn't be an issue. I'm considering using a Python script (or something similar) to batch OCR all the documents. I have access to decent computing power: not a $50,000 AI workstation, but I do have multiple GPUs for local AI processing, and a Threadripper. It might take a while, but perhaps some fine-tuning with Ollama and DeepThink could help? I'm also thinking about setting up a local RAG system connected to Postgres/MongoDB with the OCR'd documents, but I'm unsure if that's the best approach. Some concerns:
1. Hallucinations & accuracy: if I use an AI-powered approach, how do I ensure verifiable sources for extracted information? Something like Perplexity/Claude, but run locally? Like a local NotebookLM, I guess.
2. Search & discovery: a chat-style UI could work, but the challenge is knowing *what* to ask; there's always the risk of missing key details simply because I don't know what to look for.
3. Alternatives: are there better ways to process and search these documents efficiently, outside of RAG?
This isn't *technically* legal research, but it functions similarly to legal discovery, so I assume the solutions would overlap. The accuracy bar is lower, and I'm willing to bear some costs, but nothing extravagant. I'd appreciate any suggestions! While I'm not formally a developer, I have strong technical knowledge and can implement advanced solutions if pointed in the right direction. **Edit:** I started looking into e-discovery software, but I'm noticing it's charged as a per-GB fee. I'm trying to avoid something like that due to costs. The average PDF is still a few MBs, and there are thousands of them. I know I posted this on legal tech, but this is more so for journalism work than legal work, so paying per GB wouldn't be affordable. Hence my preference for the bootleg local RAGs, etc.
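For the batch OCR step specifically, here is a minimal sketch along the lines the post describes, assuming pytesseract and pdf2image are installed; the directory names are placeholders, and the RAG/indexing layer would consume the emitted .txt files afterwards:

```python
from pathlib import Path
from pdf2image import convert_from_path
import pytesseract

IN_DIR = Path("redacted_pdfs")
OUT_DIR = Path("ocr_text")
OUT_DIR.mkdir(exist_ok=True)

for pdf in sorted(IN_DIR.glob("*.pdf")):
    out_file = OUT_DIR / (pdf.stem + ".txt")
    if out_file.exists():          # skip already-processed files so the job is resumable
        continue
    pages = convert_from_path(str(pdf), dpi=300)          # render pages as images
    text = "\n\n".join(pytesseract.image_to_string(p) for p in pages)
    out_file.write_text(text, encoding="utf-8")
    print(f"OCR'd {pdf.name}: {len(pages)} pages")
```

Keeping one plain-text sidecar per source PDF also gives you the verifiable-source link the first concern asks about: whatever a downstream model claims can be checked against the named file.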
r/legaltech
post
r/legaltech
2025-03-04
Z0FBQUFBQm9DbVhRTVJqYWJQa0JxeFcxbzNmb1VCcXVUaWFkelBHOEtybVliSThwRXg0UXd1bDVuTEt1ZmVvMkp4bXgtbk1hODVUTFRpWUMxWTJyLVVJYVZIUkY4M20ySGNxcmpXNGFKdzVaSHNuUFJ1T3d1UGc9
Z0FBQUFBQm9DbVhRRHcxejB3VXpYZXk4Z1JtOWxQR3VCUEJZcnAxYnN3Vk1ETFgzeVMxVUt2LV9ETWVwWUJmLUZua1BGcVRubG5TQzBXU1lmVkJETVdwVFhrTFY4MFpVS0xCcVJNYmVVQ2pPcUFkTXhtb1J6RHFmdF8tYWxjZjdMTEpYVE5WNmxLZ3JBSEtXQnpodEIyeTYzZG56c3ZhY2NLQ012V1AtQ1h3VHNUMlpQTE9xZTV2dGNTdGxKRXJJZ0hUWWRsbVdVN0hOZDM4OXNQZWFPOGRaZ0Q4b2JCWGpKUT09
Bullish
r/filecoin
comment
r/filecoin
2025-03-04
Z0FBQUFBQm9DbVhROU9ZRVdkRTF1VHh4NElTR2NUZU11MVB0VHoyZUhZX3pJSFhMN19hdk9VRmZtdEVQeUhfSXlhTVE4ZE1MYkx3WWNPek1oNlAzZUJpSGx6SmpFUm00MEE9PQ==
Z0FBQUFBQm9DbVhRRUF2LWtfajdUb1YwYlVvd0tRdVM1X1BzTHg4MHhxZmcweEctQVo0SE9aMXRHajRocUR5NFVPMGRITUxGb1FYNVpwN0FIbFZHbHlBZXVib2psMzR3M09jdzRLdDFRa2s4RlVfR3JRdG9pTDFfMmxoN3dURGNIZDRYSzllVTVCSFBISmduZ0J5TS1NVndWLVdaWlN3NVl2anBRaFMzaFVwQzB2SFJHXzZ5UUpmY3p2emVSclVxZmJUR19jTW1nd05rakhpVEg4czdEaG5Jem9GREx5UkRNUT09
Not enough incentive on the network to keep spinning disks for the sake of holding data forever, not to mention maintaining multiple locations for decentralization and redundancy. It gets expensive fast, and no one is paying to store the data. Who carries those costs? You do, the token holder. Terrible business model and tokenomics at play here.
r/filecoin
comment
r/filecoin
2025-03-04
Z0FBQUFBQm9DbVhRWGIzOWtJdkpma19jeUdDamlJQ3J3NVdsZUcwWmxlYi13VVUwN1dtYUExSjAtdVhRM0VXRTI0MzN6djA1Vms2M1BVZmtleXFxZkRSUS1SUU1RaUNRUkE9PQ==
Z0FBQUFBQm9DbVhRcUdQOGdzTFBWQWlYUUxkcWxmWW43Y196UzFTaG83dXl2WHdzUjQ5dkZKM041MnlRTDRJRlJSM3h1VUljZHRQbzJhcnk4VW1xWU5LR0tmQjk1VmgtY2lwTk9lZ09CTWZNU044MXh4MDJmZFptNGI1amY5WkRNeDZZRjk3NmN1T0JIVm1ic0d4WGctbXV2czExbjJXRUd0MjNlWGhKcHotNWhUR2FkVFdsUVNFZS1XOVVXZEwxOXZmVU1TS2pqbFU3M29hbnJ0YnF1ZF9nVHhpSGJHUmxCUT09
Everyone has their opinion. You might be surprised.
r/filecoin
comment
r/filecoin
2025-03-04
Z0FBQUFBQm9DbVhRdmJmanRfakwzeEFZczIzVXhQc19XdzllVUNQQlJGc0QxVERzYkVEclNMVzRpRElhMUpacURmSzgzSTN6Y25qTUt6Zk0zS2VsNUpCQzhZNlpEZ0Q5dHc9PQ==
Z0FBQUFBQm9DbVhRNVg1cUJ3QnhiX2NONGVuU181YkxpVi11UDJteXBFUXhjUFI3akUxcl9WaW1jNDV4andoQWpVNTdsNG4zeWF3a29ZRDFUcWFqLTRXSTduOUZaR1RNU25MRWV3N1lOY09sRUFWNFIxbnRmTmtkNldQYmdiZXItOTdmVk1kUDlhTklqeldNbllWZV9ldUxfcmU4eVFUb29TaHM1SkNxRDMyVl9lRDRjTXFhNUtNblQ5X3diZlkwdXVjVEkxSDkxcUc0N2RIYm9NeGFhcFNqOEh0T0RvNXNJZz09
Cool now block the sub n me
r/filecoin
comment
r/filecoin
2025-03-04
Z0FBQUFBQm9DbVhRYnlEQmsyN2dOMDYtNl9pbml0MUwtMTJCNF9qLVZ2b1d3VDZNSnVqSWloYjM4ZTV3OVJrSUhLRXVBaFM5TjlTLXpTTjJRTG9BQ2huRDh1RURMMWFxV0dqYml4bVdycXhMWnRYaGRCVUtvNlU9
Z0FBQUFBQm9DbVhRVVRPTWxwbTZ6S2VBSmJpUzVsRHhPMUNteWRBRVJmVkdoS1p2VVhpNllvLUFpUkhITHJrblB6UHg3VVE0bTJaYnFWMzR2M2xzVGlaajg5M0dPSXVHd2IyaUpheExJWml1dTlDVlRFYmhxUmd6ZlVFTzlmQ0tfLWprRk1ZRm9oMF90RXdTcWFuM2g5ZTN1NjFsRDRLdVpHN1pWM0NpV2luOGY5ZUkxUnFrNDE0LXlHcWhfYnRTTTdOS19RejVGRVMwWXlzMU5VNU14ZkZMRmVZSF8zWHZrZz09
This is total bullshit.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRT0FNODBabVRtS3FScW0ybTdSY2VjMTBoMEl5Ykx3WlJjbGs3ZGtVcTlNUElsaC1DSWpnVl9Cbl82T0hEbHZlUW5ZaFVwcDZtaExOSDVFLVdpMHRka0E9PQ==
Z0FBQUFBQm9DbVhRVEItWUJ4eW5JWmN6TGs5SEljQ2xrT21keERXN0tQX1FNeGVTWHQyeS1oOEpzVE5ZTWgta1pPcUpOazFhdnRSWXJGb1U3VFNOcFZfblpRYnRTYzNFb3ZxOENXTUJNVE9lSERaSlFDczdKbU5ua2VmaXJLZ0Zvbk9paWpobURZQnFlVEdhbEdRRV9yQUJweXhUSHR6dDkxUGFtelhIUlNTQTZGVXY0ekJPN1MxclRyWHZKa2NzaTZ2amdRb1hqV3d5Uzk0QUFPN21HV2N2Vy1OQjlaMmQ3dz09
I feel your disappointment, sometimes the reality of the situation can be difficult to accept
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRa0NQZUJHNnI0cVJHdm5GSWljN01qRVJTUU1hME5qTTVENjJfSlJfNWlxbW5hSHhlUVU1VUJiLWdfSnJkcERUUUJwc1NObS1xRU5VaXNTS0JhSUhpMkE9PQ==
Z0FBQUFBQm9DbVhRUlMtMkxFNVh1cUFRektVRUJnTzBGSlVoNjB3a29OX2V0TUFFeXVuR1NIZUNpUDRabmdsalpILUpfeHh6c3BXMmlsRlZOazVzSDZXS1pocTlPb25FbWZsYThZOTkwU1I2a3dwQl9fZ3JDVXFfTVJtVElkZGhsT3lCcGgyY1JwNWMybTczQnZQZkRBeUVuZ3hSRWlJRnBOcWN6TGViVEZfMTJPdUxQUExHRVZLbGNycjdhLVdJcy02MldXcS1yQS1ocjZIdldsejk0cm1mWUhFUm40ZnVoZz09
Nope, I'm here to help people and give them facts instead of BS. If you have valid counterarguments, let's hear them instead of dismissing reality.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRVkJiS0tzdC1qXzhzQU5mSFM2WWJzZFVyYVVsaWNfdXFnTHF0RktIZE85OTJVRWpGekstSWNkeDhoa25KMEVIN1pFQ2lnXzNpaDU2MThXY25iU3dDVEE9PQ==
Z0FBQUFBQm9DbVhRODYwbVhGQThhaXVJR2J1U3Ztcm8yOU1pcXZXTXlrakF5djNwMnFtbG45azU0VWtmdV84VnY1VENkYkZGNlgxMEV5cmM0M0g5eFVxYjdrb3ZxMUxXZVBpVElwMDhCQ0p4RVBpR1puRnhubnl0dzVNZGozdXYzdmJ0bjJRUDI1M3hpN3JHOGo2U3FaOExhbGpJanIwYjQzdF83TjJQaVBqLUlzRzcwZnNPa283UjUzRXllVGEyUU00SW5tbEg5ZHJRN045RFYxZkM4c1NFbzNiMGk5LXEwdz09
I hope this is not just your opinion.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRTXJISVUyTTlibTBPZS1la1Z3ZnZiSTV3b3N4bTNXbUY2N1NKdVFpNXY0SXkwVURNakF6dUN5Q0xKVWoxczJNeDkxbERzdlVMZ2F0YUtwSklzRW9TYVE9PQ==
Z0FBQUFBQm9DbVhRTC1QSE1IckZuWTMyLVFqMnlOUFhKZThBdUpyaFBVTmc0S2o1WjIxd2lFbkowTFg5T2NudGlKTEJubk9mbGJHY0NxaGVwTEU1WmZpUEZHeVRWcGlMZkpMUHdTMnVDYVpJM2dEWHFyc1RRQi1CVnc1MEpfZ2Vxem9lb0Q4ZWZPMzhNN0RDSThseVk3QUxqSHRSb1dvZU9jUE1QOVVJUENQbGVfQnVwdm1LYVFWVjdHMlhiY2UteUVSMXNVZHNXcjR2Zm9wTXQ3OVJkOWU4eUYyTXYzYVIzQT09
Feel free to change my mind
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRLVlkeTBQZW1BZ3VyX3E0dllTSWZSTko2YmNOcVJNT1UzUmZhQktaR0ZOVUZ1WnBrTmd5aktYYUc1cThDNFpqZ2NZQmlGM0I3cTh0UkFsRFpGS3Nod0E9PQ==
Z0FBQUFBQm9DbVhReTNVYUxBbVZzWUFLa0dVaXNianZOczhqQ0tLSGQ1Q1Y1NW92ai1oNVcwdVZ2S0lUS0p4SzJiQzVMa2xnVXdlRi0xeDZjcDRMVC10eU5XV0loTHcwVlBjWXJFaWlSTnhKS2hMeVZDZ3RXQ3NoS3p4ODRzTXYyNFlTQnktMEdPVG1POF9aUVBZV0VBWDNoZHU5ZTcwN1hnZWhoUE01OV9SSXFDMl8wRjVHVFpDLThzOEpEUjFqVGRHdjNMekQ0WHBsbzcxSEM4cE1XaGt2ZHpfbVlwdUExdz09
So many big companies and partners like using Filecoin, so obviously there's nothing wrong with the ecosystem.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRMHNiRVVlTkFHelBaeDVnblJqSW9Sc2N1Y0lxS0VodEZnWjI3RXFmS0xobjBDeUk2ZlRiSUxCQlExWk5JZ0ZVSzBnUGRUdXJGQ2xfZEFvVndtRXZxblE9PQ==
Z0FBQUFBQm9DbVhRVURzRzM4c1R2ODhmRjRWRXhMX2JzS0FRMWxkQ1IzTFhpVlRDWGZCM0E4ZHhLYUo4b0I0X2taMkVidm5QQmlKZGx4cWdaZXVteXA2NlozUGlDMmtXcUQzd3FmU0dYcVhzSS1uY1Z3R3lFVTFGcU1XQ1dONVRBQUExNXlXUVpQT1VOX0xZbHVEVmh5ZXcxYW04UHVlLUp4TmUwSVJqbHBoOHZRbTIwWnBOalJZZk1WNnd6UzlDSDRULWlLdTRRNkxzT1NYcGtKdTJjcDNaeVZ1Z0d6NDJaZz09
What are the best AIs out there for making summaries and for finding and researching scientific papers and articles? I'm working on my bachelor's degree, so I want to ease the process as much as possible. I know that currently both GPT and Perplexity have deep research, but I'd like to know which would be better to opt for. All other resources are welcome! <3
r/airesearch
post
r/airesearch
2025-03-05
Z0FBQUFBQm9DbVhRdUhnX056QkpLRE1neThDRUZuT1pkNU40Nk02VTJwa3V1d1VSZERTeTlBaHJRQUYtUXJWYm1sSUZXRzFjS1BzbER0VFdmMVlqT1RuUW10SE1YS1Z0Y1E9PQ==
Z0FBQUFBQm9DbVhRNll3cUNXUmtBbWJVcV9JYVFYTmVfZ1pGQkdIZ0ltaGg3ZldYVUE4cTMxZUluQWs2MWpxeW9HV3pYM3NKS1kyZlIwdHZIZER1YXFuR0Y0dnJTNXA4bDZxT1ZFQnI4M2ZxZ2RpZFE0SUhEUkN1b2R0LU5WUnZ3a0ZxcUloaGhSY0ctSXUwZ0ktcm9UZEtBQVFHMEhGRGE0MUNfVXRPYm83cEV4U3FRZUotRHU1TFhrYTlKVmxoaG45MkZ0NDVmTmVl
Who the fuck cares. Do something to help believers in this torpid project recover the value of their investments.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRUWs5RzA2S252N2tUNmxrbjJTU2dhSlNpOUgxZzBnMHFnTE5mUzRfX2I5SU1hcklVRzBlcHJDbGdDYjZyZzVxT29HdGZtcEFUUTV5RHdrb00yaGxnQk1JeTlQYzRnLWlwQzc1VUdaN0FkTkU9
Z0FBQUFBQm9DbVhRZU1XTk1jZHpkSHZLenotR1c5Q2tZdm5GR3dhRnJmTFU1SElIVGU2WXhZbVFGWTdENzVXWG8yWHNRb0FPa0JvNFUteGFXUlRxTjhxOW1GNm1sNEEwWmwxcWx3TktzWThvSVdVSnY4cEJUcTYxVkJocVdDZV9KT3pMSzV3emljR2UySXhVZHMwWGp3RDlSVzlOWW1seWFTdVZiNjFMMjJKNm5aaEJNRlZYak5Vc0hZOHREcnRBRllPS054Q1R3MG9Dakh6ckhVS0JhX05Cb2dFXzAzRlZoQT09
You only lose money if you sell at a loss. Be patient. That's all we can do.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRRGVQRE5la1d1dTZEZGFKcWVxTUhPbW9MemJPZ1FYcHZCN0JYUkRoTXgtSWhiajFxZGxVSXVRTHZaNFV0QlpFZzlzbk40SnFIeElDYzktLXRWbll6NkE9PQ==
Z0FBQUFBQm9DbVhRbkdlS0FBdy1OQ2x6SUxlOGh4SHp1OWNSTzk2Ni1kRjRzbnFUbmtCTUVweFFnV2FrOGFhREZPbzhkbWpRMDFXX2pTdjBuR2RzWWtTYmRsMDBRRUJoU0JzaElMWlU3NWFZck1uZXN5WEZ3SlpkSGkydnR6MTYzWm9QOGVSbHQxSHRYdTJzLVZuOU1rZGF3UG83Q0VTRDFhNkJLRlNFbFZYRFNsTzhKVm01bVlWeWRBMVM5T3pYZ2hEUzFhcF9aMmYwRmFWZTRuYmQ1clFYRlVhZTJLNGpodz09
How do you have this info? You must be an insider.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhROW5IeEpsakdpTmJveUZzd0p2MElndnI0VWliRXo5Y0J4X0JGUFNHX2c5VndtWHBXdlpjM20xU01IUmFYZHV5bDFhTHNGbVRWMUE5czQ1djN1SUlVOGRlbGZUSHh5LWZGXy1iN0xKellTX2c9
Z0FBQUFBQm9DbVhRSG9ELTFTOFBaRXBfZll4Qk5rRXhzdWVhZ0hQeGw4SmQtNXlYWVJJNF9ZRlBkYkJPOGxZWnRueFljQ2s4c0RCZUNkM0hUUWFJZzFEUmhYUER4M0s4UkFaMklqRkxUdFc1d2hPUVpiS2RCMjR6RjJNV09TczhyT2JtU1RrcHFfbWk3U3ltM3d4RFFSNmxkT2FnN2ZSUTlfaTdiTVBtamwyVkZocjRwRU82WlBZPQ==
No more meme coins lol messing it up for pepe
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRVFRKdHNma2xLSkFTTXhCdlk0UXdXdmdYOVphUWdWeklnS3hGN2FsX0RWOTNTNkdTVnBhdC0ySzBDcEdQd0VBMkc3c1lQMWktMWhJeTBEVmNYVmlVeWc9PQ==
Z0FBQUFBQm9DbVhRZEQydlFKU0o5V1dDdk5EcUd5WGQwdTVHU2hIeFB3ZEQ3S18xeWRQVEhzZ2dJRVA4WTd4TXZUbzNEdVh3ZkpWU2JnWGF4bFlEWXRPZkdGbXRqcU9nYlVDUTZ6bVJvME1OSW5SX3lqS25TODFvQkdfbTJjUmg3WUlBcHlNZ2hHRDI3dEVmTThfNWxKVGNqa2FhU0Ewbmd0elJqOFVvLUlNeEVjTTFmOGRTcTJNPQ==
Bullshit more like. This is a solution looking for a problem.
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRdV9QNG1yUnB1Q1NheXM0MDMzN0JRcm1CTTloMGlHTi1UeWIyd3FVN0c1QmpnVnhfZzF3eHlxVjdIVjdINzhPdWk5ZTBZWUJuaEh6UzFzNklSeUk1TW1HTFFoUzlFWG1MdzRpdHlhVnZfNTQ9
Z0FBQUFBQm9DbVhRaDlPNnFaUGZER2JrU0QzU0lGQmtDU0RpdnhES3hKaUxGejBhb0RVUHBoeVJBbE5NSGlJNVZtUXNvbDV3WDNESVNKVVR2cnVwMm5FdXBudF9WZnZjMHJ2Vi1zZTE4NDBJWkFlOHRKajBQdldLMnNneUdRSW93cVN2aE1iamRfWWdFRHJVSEhaZDRmSW1jSVhsODNYbnU0NGV4dm1vSEpBSlJ5LV9yb1BTT0V3MnZ5N1A3NEhaX0t2TDgyRVdGWF9fMk1LZEo3YVhjRlhkZ0dha0Nhb18zZz09
MAMUT introduces a systematic framework for generating math training data by modifying existing formulas to create new examples with controlled difficulty levels. By parsing equations into abstract syntax trees and applying constrained transformations, it produces mathematically valid variations that can be used to create specialized datasets for language model training. The key technical aspects include: * A multi-stage transformation process that parses math expressions into abstract syntax trees * Five types of transforms: variable substitution, constant substitution, term addition/removal, structural transformations, and complexity adjustments * Mathematical constraint rules that ensure all generated variations remain valid and solvable * Difficulty controls that allow for targeted generation of simpler or more complex problems * An evaluation framework comparing MAMUT against GPT-4 for formula generation quality Results show: * MAMUT outperformed GPT-4 in generating valid mathematical content * Human evaluators preferred MAMUT-generated content over GPT-4 in 72% of cases * Language models trained on MAMUT-generated datasets showed improved performance on math benchmarks * The system successfully generated variations across algebra, calculus, and geometry domains I think this data-centric approach addresses a fundamental limitation in current language models' mathematical reasoning. By creating diverse, valid mathematical examples at scale, MAMUT offers a pathway to improve LLMs without necessarily changing model architectures. This reminds me of the whole "data is the new oil" perspective, but applied specifically to mathematical reasoning. I think the educational applications could be significant too. Creating personalized practice problems with controlled difficulty progression could help in adaptive learning systems. Teachers could use this to generate homework variations or test questions without spending hours creating them manually. The framework does have limitations in handling word problems and more advanced mathematical domains, but it provides a solid foundation that could be extended. TLDR: MAMUT is a framework that creates variations of mathematical formulas with controlled difficulty to generate high-quality training data for language models, outperforming GPT-4 in creating valid math content and improving model performance on math reasoning tasks. [Full summary is here](https://aimodels.fyi/papers/arxiv/mamut-novel-framework-modifying-mathematical-formulas-generation). Paper [here](https://arxiv.org/abs/2502.20855).
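To make the AST-transformation idea concrete, here is a toy sketch (not the MAMUT codebase) of two of the listed transform types, variable substitution and constant substitution, using sympy's expression trees:

```python
import random
import sympy as sp

def variable_substitution(expr):
    """Rename every free symbol to a fresh one, keeping the tree structure intact."""
    new_names = iter("uvwpqr")
    mapping = {s: sp.Symbol(next(new_names)) for s in sorted(expr.free_symbols, key=str)}
    return expr.subs(mapping)

def constant_substitution(expr):
    """Replace each integer constant with a different random integer (a crude difficulty knob)."""
    return expr.replace(lambda e: e.is_Integer,
                        lambda e: sp.Integer(random.randint(2, 9)))

original = sp.sympify("a**2 + 2*a*b + b**2")
print(variable_substitution(original))   # e.g. u**2 + 2*u*v + v**2
print(constant_substitution(original))   # e.g. a**5 + 7*a*b + b**3
```

The real framework adds validity constraints on top of this so that every generated variant stays mathematically well-formed and solvable; the sketch only shows where the structural edits happen.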
r/neuralnetworks
post
r/neuralnetworks
2025-03-05
Z0FBQUFBQm9DbVhRTFhTYV9ZSTM3cVItUk1xVElFandVQlQ0Z2FfZ0RYWjlDcVFTTDRzeWIwOEVSelZwVFNOVUlMZG1hTjc2Q01wZkpaWl9OTHhYODRXWjVxeDdRUjlWSXpfNEN2RWliWFo1dnk2YlU2Q0tsZjA9
Z0FBQUFBQm9DbVhRWXloSzJzTDJfSElLeHpEdmYzaDFRODhER2Y1M3dEZUNhQlhDY01oU1VaUUJ6Q0RtMGZzbWJ0T1RsTW1DblFCMnpvLXpVSGlYbk5BYmlTWnJTZm05RE9iZklJelE4aXlVMS1CRmRHZkdCOFhndXowSTVsUDFWaVJlQXpRUE1YM1c0TGZ3T2VnUndjQnE0RHhlWTYwZUJUdUtGQnh4X1pHV1ZsTTFZZXAzc1MySXBoNUV1bHBtMGFYcGxldVdpS21hUWFaR0VmVHptTWdPZlJYZUR6RkJtZz09
Hey everyone. What is your go-to e-discovery tool and why? There are so many and yet many of the old players are still around and continue doing their thing. Just curious to get people's thoughts/experiences with some of the players that are newer per se.
r/legaltech
post
r/legaltech
2025-03-05
Z0FBQUFBQm9DbVhRLWJ0dlFHVEllcEp1Uy0yV1lva280ZGFPTFFDU0xCbG5vMFpZUHk4ak5kZGg5MVltOVFMSDRwU2UzWHRkLW1WakZzc0VGOG5ZLUxBeEpudF9PdlAyV3c9PQ==
Z0FBQUFBQm9DbVhRMVpCXzkzRGpINkZYcWFiZGdRRDMxSk9FcFc4TjNQNzZBeFdDamItQWJPWmwzS1RiZE52ZnZiRFdZNjlyYXJvR3hOZkdZQ09jVFk5WkJOREhXT2szcFdyVVVaV3N3aEVEY2o2eExaM2FySU9yODlnUVF0SFc1WmdIRnYyMk5KeGRuX3F2ZjZsYmU2bEt4TWJ0c0Z2UUM1ZmNILVBIU2s4N1F4Qm1Sa3d1LTczRVVGaXlwNkFXMWRsSnYxNFRSYnBL
Maybe storing Git data could be the problem?
r/filecoin
comment
r/filecoin
2025-03-05
Z0FBQUFBQm9DbVhRTHJvZlFMX0tOSHVZUmVOQm1nb1dMOXlQeWsyblpkM2tUdURhOUJIYVBWczlQc0Jkc0pUbkJGbDVVQXBOQW9ENzRkSTZTOTlNY3cta3J3SkxIdzFrdUE9PQ==
Z0FBQUFBQm9DbVhRU3ZXMzFDampkcE5BUG52amRpdDN3TjE5bVU4aDk2eWI1eE50YVdnNFhtYzhrX1V4Wnd2UTExNGZaYTFCdmNzbkh1b1Zxa1NnTFNzb2ZHWlZ3VWtFSU8xMmhwa095M1ZDaXhPN3g2akx0Wnk4UXJScWYxVUx0VU9nZy1TOTNPTTZfNklhMUFONkxjbEZINmxJVExjTnRra0RMQllkdzc4MFlRQmpLYk9INlFEQ2JNN2xVdFk0XzBOS1ctaDR3OTNFWkpnT1pLeDBsLUtvaXhEcloyLVJ5dz09
I've thought about how to learn about legal tech, given the dynamics of the space, from the perspective of someone who isn't a programmer or a coder but who has grown to love the space; that took time and effort. Here are some resources that come to mind:
* For books, those include AI for Lawyers, Legal Operations in the Age of AI and Data, The Legal Tech Ecosystem, and Data-Driven Law. I tend to like books that are less tool-specific, aside from AI, if the book is more about AI in general and less about one AI tool or another.
* For blogs, those include LawNext (Bob Ambrogi, the go-to guy for legal tech news), Artificial Lawyer (Richard Tromans, good for legal tech news and analysis), Legal Technology Hub for product evaluations, analysis, and a robust directory, and Legal Evolution for long-form thought leadership.
* For podcasts, those include The Geek in Review, LawNext, Technically Legal, Dear Legal Ops, Five Star, Counsel, Future Law Podcast. There are a lot, so these are just some of the ones I tend to like the best.
r/legaltech
post
r/legaltech
2025-03-05
Z0FBQUFBQm9DbVhRNlN2X18zZ0ZxdjB0OGc5VEFfQkRlbEVoeGhpODlNUy1tckRWVmwyQS1oenBKWnY5WmRjbm14bjFkeEVzRlEyTG5CcG4wdThUX21vNWMxMWsxMWJKOEE9PQ==
Z0FBQUFBQm9DbVhRZ09TQTRlOFhjQWU2alZzMHd0cEVVamZzLV9NNTljWUo0ZFpyRi0yTlNrTHhJQ3hUTFNMR0VoVTh6YmR4UW5DRXBHNFhpUFJuVmV5UkdDdXBaaGFFcmlpMjJSbklKVHVuMnpIWWZoUVJGUU5OVVl2R2I5X0ZzSUpnQjdPMERwS3JrdWxFRzZGM0JhbXo5RHo3T29JejVjdnUtaEYwWHZTUzc5eHQyOXhSbjBiUVpJNDkwdmlaS0JnYUpYQ0JRbmx3
100? Not sure about 10
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRSkdFOGRsRkV0SmlMU1hiczk1YUk0M0R3cDNlcjlJYVB4WTdnc2ZBTDZFY2RLVlhMNGhCbFRnUWdmamZweHd3TVdBMlVsUGU1VGtVWk9WV0ZJZzZFd1E9PQ==
Z0FBQUFBQm9DbVhRTEw2VHFKRnNUczMtcFdQcTNFMDJYdy0zWFVKeVVzUFh3VVFGN2xYUXVrTU95Umk5blpEdlRzeWcyeXBkZklmMWxoeTJ5cTdTWUo3OTFYUW90clhMZ1A5YWdZUW5hcGl5amtZM0UyMnA0QWNQWlFpV2NwSmE0aDdPZ0ZEVXF2aHFfQlp2X3AxTkhYMEppOXEtNk4xcnF2LVBQNWxiZ1hMWE96alBKZHZTbk5TdU45MFhlRHZxQXh4ckg4SEdrU3ZQVjJFSmZza2tSalAwQjFhRXdtbHVhZz09
Hi! I have an LL.M (Master of Laws) and have worked at a document and workflow automation legal tech startup. I am looking for a practice innovation / legal tech role at a law firm. I have been getting some traction, and hence I need some interviewing tips and mentorship. Any suggestions, tips, or help are appreciated.
r/legaltech
post
r/legaltech
2025-03-06
Z0FBQUFBQm9DbVhRcEtlN2VMUm1OWHhvVUNkcmJTVXNYZGpkZ3Z1UWxESUhnNzFpUnAzaXIxeTJkVy1TT0daVmNwR0RKRHJKU2VzRkNPVVBzaVJHQ0s0dllycE9nSjBNSFNUdEVGQXpKM0NFZi1kQ2xzT0xEZ2M9
Z0FBQUFBQm9DbVhRN1oxQWJtSTdQRS11SDJQNk9fWnp0d01weUtZbThoaUFnMHg0b2tVNmx1Y2V4dGg2VnY2V28wLXdCTHVsY0k0SlR1SzM2aVZ2NWhrb0JJUUNsckY3NE9QY1EzT2NmZ0hCV2RDZ3BpZGJabkxpRWFZbFVCUG5aVWFPWVhZSHBxM1puX2JqV3diaTlRNzlROEhFc3NKdXROX1c5TTd3S01ZVEctcl9OR3p5cEFRPQ==
GitHub?
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRYjk4NDE0UFk3aktTenB1RVQ1S2xoazMycGxNcGJuTUpESi0tNVNGal9rcXVnUzhSb21IcWY0eXNQN0Q1QUx2SVh0MU1tX1lLdlFOM0F4d29hd0Q5RkVubTRqTFU3Y2tGTGZUUG0wVl9IYUU9
Z0FBQUFBQm9DbVhRb2l3Y2dRX0tJLUhtUkpBS3pyOWRna2h4SVF5bE9teUV2c0NQamlnY2pEVVBlb3ZWekk0a25sSzFWS0NFVjc4OGtkMzI5Ym9zdExXT1JIYWEyTnF2cTVZZUtZZXFvRmVlSXd0dnBxN0FvRlVuSVdJYnlEeGUzUWhuZ0ZqVVNwMGlpOWdORXhNTmZVS0hid1h0NGZWcFpUWXlhSm1DbmREaE9CUEEtMURNY0lWZHBUeC1lUVE2cUlJeXpvUDRnaXp0MU91c3RwNGxOSlE5VG9mUDltb2ZuZz09
Best I can do is 12
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRUks0SGtxSU9PYnFpMTVlWjlaTktaak5DS19xaEpwMUg0cVBrWXd1NUVQcGtVVVBYWlZKT01RLTMwM193WmRONkNxOTJJUzFQdHpwUVFpN2ZhcHo4Ymc9PQ==
Z0FBQUFBQm9DbVhRc1Jyb1VGVlBIdVY4UEM1QkhnSG1SeERuMDZVVUFYN0pOOUFZZThSdzNPY2JhalE3NHlhb1FNU05EMTZZNmFHRmtMSk5DNVY0X281V3hKRkNrLTRNdktrT1g0TU1fVlQ5ajJzMnFmaVdzUHdpRWVVQ19oYU9Zb1F3dUhvNUROWXV5M0Ffc2tac0ptQkJEbWFzb2s5eWJkdFhVOG80X1BHS1I5emtrb2gzU0RkR0xiSVdaRE5PZV9OT0M1cldXcGZOVXp2eXd4Wjdud1ZsQkhBZ1lKbEl6dz09
Best thing is 28
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRQ1V3dXY5WFhUdHA3RGY1LU00dVpXUnhLUzhuMFlKQTI3UVFxek9PRmZoUmROWFoxS1RnMHAwUHl1ZzEtZnlON2ZUdjBBODNvQ0Y1ekFlYmFZWFVEbFZpYVBHY3pLRm5PMi05QlE5SWJSVTQ9
Z0FBQUFBQm9DbVhRbFdBWm9DTml1dmZwcndvVTNIQzYtN2ZxTFVBN2pUaU9NYW5RSE8yNmVjOVY5RW9wQTBwLVdMVXZreW1EMm5ZdmJveGJ3bjZabzdMNnRTVEZMeGlUVGZQWUNNZXZMN1RxM3Q3NmZaYW9uVmxzVVhOZW5NcnBxckNndlVWdERRTW1DWG5TVTVzNzQwb0R2Q0ZxOUZKbGROeEZ6UDhvNjdSOXAydGZ5dzNrVnRkZFdOMFZkR2lXTGdWVDNGczJ5ZTZ3N1RxOUJpb1dsTmRxaEdkRTZBN1lwUT09
Seriously?
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRNWZiNkpoRUxyUFgwZnFnVExCLXd2RDdtZmdkdEw2R0hiMUY0dVNZTTNFRjhPRFR0MTNkVVZCT3VkX3YxS1dTRk0tM3l2dlNXRFY0dlhPbUVrTHFLTUE9PQ==
Z0FBQUFBQm9DbVhRV2FvQjdNSzg2bW8xcndJRl9GdGxQYlhqU1pxem5YTmg5b2FPRmU4OEhxLVhlVkdEMkJPQ08zZzNSdmJqVUZ1dVAwcFRnYUJ0VG1VcUl0YlctS0RucXZmeS1fdG8xN2hkVVlkRUdyX3VnNkZxVGVQcjlVdFpISUh6a2VtN1gtNk1rWTJhRW54M1Z6M1FWNUtmRVBxZFNVUE4xbW9Wc3dRQXRQQ2Fsd2doNklyX3JyMVhGN3dWeUc2SWQtSHFFbkhNMHpLQnZLUmxoRGRBazlOQzR4VTlnZz09
How did it launch with such a big market cap
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRRVg5UUNzNnYyOWxuUHlQSDZGOUtaZTBtMWJqYjJZdnh5OGhELTBsZXBJSnprZEtRUXExTUdHeDhmYUJEQlZNZXVKZV9kS0czenEwU29VTXpXVEp3Qnc9PQ==
Z0FBQUFBQm9DbVhRekRQLXNLalktWWtBZEMzQnhWdnYzSXlRb21FQmVnZVpwYThLb0JET2Ezdi1nR29IN0ZobHN5VlBESlFwdV93dWczQ3NmaXc0WWxQVnJWMXFyMWRNTXlEX0Faa1ctaGpOejNzZjVtdnR6N0psX2M0dW5Ka1oyVzNoU0dnT3FzQUd1UW5rdHhnc3BjTGRLdWdIMXlOUmZGWmRjZUR3TGJxYTBwSklONzRpMWpZPQ==
18 only please
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRY2RLcjljcEpXV1phenQ5RFJyZndZT1FvVGtMNlhoZHZ4Uk5RcUt4ejF5WHIxQTJKZmRRMXhmNFZWVC0wb05rN0hhR0Z6U3o3X0ppRHNLMVNLOV9nNnJ6QU0xZVpVcFJMVVlsdTNsY3JrN0E9
Z0FBQUFBQm9DbVhRRGdDSm5wRXlyT1hNZTdkWE1pcEJYaUtRX2VpSE54LURILWJYbGhwU3ctTVZHTmhmYm13RjZCd2owZTI0ZG1id19LTWVpR1FMRU1jMDdmeDZ4YnctbkNmTGcwT0toS0F1SUczWUxwN3EzbmlhcmFHVnhmaWw2M3NLNG9rMUtOMFpWZkFhTW5JRmFiRDN3RktDaWczMGhCdVRZR3VXdklHd1JzNGQ2VWRzaTMxUzhDUmtFQlJrSU83THZSajB3ajVPbVNibnRjaS1NWFZsOEhJUU1BSmk5UT09
The SemViQA system introduces a novel approach to factchecking in Vietnamese through a semantic question answering framework that integrates multimodal processing capabilities. By transforming fact claims into questions and using a vector database for retrieval, it achieves both accuracy and efficiency for Vietnamese information verification. Key technical points: - **Semantic vector database approach**: Uses Weaviate to store and retrieve information based on meaning relationships rather than keywords - **Claim-to-question transformation**: Employs GPT-4 to convert fact claims into searchable questions, improving retrieval accuracy - **Multimodal processing**: Handles both text and images using CLIP and ResNet for visual feature extraction - **PhoGPT integration**: Leverages Vietnamese-specific language model for text processing - **85.33% accuracy** on the ViQuAD dataset with an average query response time of 1.78 seconds - **17% improvement** over baseline Vietnamese QA models I think this work is particularly important because it addresses the significant gap in fact-checking tools for non-English languages. The vector database approach could be adaptable to other low-resource languages facing similar challenges. What's especially promising is how they've managed to achieve strong performance while maintaining reasonable response times - crucial for real-world applications where users need quick verification. The method of transforming claims into questions is quite clever, as it essentially reframes the fact-checking problem as a retrieval problem. This sidesteps some of the difficulties in direct fact verification. However, I'm concerned about the reliance on proprietary models like GPT-4, which might limit deployment options. I'd be interested to see how this system performs against deliberately misleading or ambiguous claims, which weren't extensively tested in the paper. The current Wikipedia-based knowledge source is also a limitation that would need to be addressed for broader real-world usage. TLDR: SemViQA is a Vietnamese fact-checking system using semantic vector search and multimodal processing that achieves 85% accuracy on ViQuAD through an innovative approach of converting claims to questions for efficient retrieval. [Full summary is here](https://aimodels.fyi/papers/arxiv/semviqa-semantic-question-answering-system-vietnamese-information). Paper [here](https://arxiv.org/abs/2503.00955).
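As a toy illustration of the claim-to-question plus semantic-retrieval idea (not the SemViQA code), the sketch below uses sentence-transformers and plain cosine similarity in place of Weaviate, and stubs out the claim-to-question step that the paper delegates to GPT-4:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy evidence corpus; the real system indexes a Wikipedia-derived knowledge source.
passages = [
    "Hanoi is the capital of Vietnam.",
    "The Mekong River flows through six countries.",
    "Vietnam's population passed 100 million in 2023.",
]
passage_vecs = model.encode(passages, normalize_embeddings=True)

def claim_to_question(claim: str) -> str:
    # Placeholder: the paper prompts an LLM to rewrite the claim as a question.
    return f"Is it true that {claim.rstrip('.')}?"

def retrieve(claim: str, k: int = 2):
    q_vec = model.encode([claim_to_question(claim)], normalize_embeddings=True)[0]
    scores = passage_vecs @ q_vec                  # cosine similarity on normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [(passages[i], float(scores[i])) for i in top]

print(retrieve("Hanoi is Vietnam's capital city"))
```

The retrieved passages would then feed a verification model that labels the claim supported, refuted, or not-enough-information; the sketch stops at retrieval, which is the part the claim-to-question trick is meant to improve.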
r/neuralnetworks
post
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRbjdORmNHODRPNWNmc1FoSkp0THBPV2dTSlBHeklRcE1NOW05LWtvZUs4SGdqZDRfTGJ4ZVVadHNuXzNaQjJmeWRrQWw3aThsM041QXdkLV9JaUVGYjNUOUZLUGcxWnlJZ2o5M005WS1YaUk9
Z0FBQUFBQm9DbVhRYUdiNk83SUVUeXFOZ2kwRk1VSkpDdTR4VHZnVWk1TEJPU2Q4b1R4djYwaWZHZ2xTcks3bklTTC14enRIRHYtRGtLeGE4TjdkRUpDR0M4bUZ5N2NsS3VlSjZaTDFrVkwxYVpoVlJId2lvbi1leGxfVXdOOUo5OGdNcDl4b1BZX3ZlRXJ0VlFlRkhhSFVrNWZXcDRaNmxjdjJYa2FJTEJNY3RVWXFqZUFfVkhYMnZoVW8weXVsS0tMRzI5aWVrT21NVWpoR2ljcmtxTTJZVDNnT25SamNaZz09
Probably not..Maybe 25 when Trump loses his presidency by some unforeseen miracle 🙏
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRUG5yUlJBSHJIMmxMWlF0Z3cwZWNfQzdzSk5ienBaUW1nUTdnZkV4M1ZidW93dE5SLXRZdlQ1Q2QtU2I5NThaRlRuTWlVV3c3eC1XRldLbE5aVlJQMGc9PQ==
Z0FBQUFBQm9DbVhRTjRNUG5idlZ0dnR0b3gzaTdhdkt5Vk1aY0NhM0dFZ000UFNIOU5pTU9jc040alc5NkxYdVZYbGctMlU3UldoLVNyLVljdk53ZFpHdDl3dDBFeFFCMVZOLU9vLTAzR2JiWC1aVkJQVV9UN3lFWHJoR0FHYURRWVRZS0RlSHJfcENuU2l3aENwWmZrZml4Rk5aYzBsY1Vtb2dncmNpSHB3OWt4QWpuN1ZYV1FoTnp6Y0VlMzRZc0ppci1FdjhNTktCU3VWcEJqZnp1Y0RlM0E2SHFlS0hMQT09
The brain learns by continuously adding and refining data; it doesn't wipe itself clean and restart from scratch on an improved dataset every time it craves an upgrade. Neural networks are inspired by the brain, so why do they require segmented training phases? For example, when OpenAI made the jump from GPT-3 to GPT-4, they had to start from a blank slate again. Why can't we keep appending and optimizing data continuously, even while the models are being used?
r/neuralnetworks
post
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRaG5nTjZtQ3JLQmhxNDZMVXEtaTZYb3pRQkJjZUVNWEJ3eXVVRmxsWnczMThPbFBuSVMzNlV3c09LN3N6N09MRjlwSnZZMEtjVGdNcGZURWpZQ2JNUkNzdFhOZFdHWGoydjBRTE5reGVseHM9
Z0FBQUFBQm9DbVhRdHZ4cXp4YWNibVg1STJPbDdkMFFoa1N3bGNYN2ZWcFZzT3hieUp0TGFUZVBQUG1OaEtWY3pkYVFiTVJtLU1GaEk5SFVCV0lLdHlxTEJvQTBKamxYamdBdHNqWVlpaVltM3AzcTFCM2s0dDI2TFZGREU4cEFzUVN2NTZ1TWtUN3VDYWdjYmpkYXRJQTlSVkhwRmptQUF1dG1GU1B0WjJxY1pvbzhqUDNIN1RSa3g2am5NTGVaNWZoZ2NFclEyb0xP
I'm not an expert, but I've had some experience with neural nets recently. Here are the reasons I can think of:
- The simplest transfer of knowledge from one model to another would be to copy the weights and biases learned by the parent model into the child model, but the models all have their own architectural differences, and this makes that difficult. You can still transfer that knowledge via distillation, but that's lossy.
- Even if the architecture is the same, all models learn some biases based on their dataset and training methods. If we keep transferring the parameters across generations of models, they'll most probably be held back by all this generational bias and not learn patterns from new data well.

You don't always have to start from scratch, though: if the new data is not wildly different from the parent model's training data, you can fine-tune the parent model, or use LoRA or knowledge distillation to teach the model about the new data and extend its capabilities.
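A minimal PyTorch sketch of the "transfer what fits, train the rest" idea from the first point; the model classes and shapes here are made up purely for illustration:

```python
import torch
import torch.nn as nn

class Parent(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(32, 64)
        self.head = nn.Linear(64, 10)

class Child(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(32, 64)   # same name and shape -> transferable
        self.head = nn.Linear(64, 20)       # new output size -> trained from scratch

parent, child = Parent(), Child()

# Keep only parameters whose name AND shape match the child model.
child_sd = child.state_dict()
transferable = {k: v for k, v in parent.state_dict().items()
                if k in child_sd and v.shape == child_sd[k].shape}
child.load_state_dict(transferable, strict=False)
print("transferred:", sorted(transferable))

# Optionally freeze what was transferred and fine-tune only the new parts.
for p in child.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in child.parameters() if p.requires_grad), lr=1e-3)
```

The moment the architectures diverge (different widths, depths, attention layouts), the name-and-shape match fails and you are back to distillation or retraining, which is exactly the limitation described above.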
r/neuralnetworks
comment
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRNVRiVXUzOFFMQW9aXzZSWmtKUzBTTndhTERWdUQ1S2JHX3VrVUU2UzdBZVhyZ2Q5VWpsNWIzcjRSeXZwX0xORGhhTE5fTWFOa0h0dHZyLVZXLU5QQnNkZTVReU1WMnp0ckRzWmMzeVlaTDQ9
Z0FBQUFBQm9DbVhRdWhnVVpaMWVqVkdaYklqTkdMYXN3ZVgzaHVjVkdJSllqdDE2RkdSdm1oTFV1WE0xTU9fOTE4aVg1WXRqRHZjeTFlZjhqdVlHSWMteFRRZk1UdTFWU3JJUVFob2RiR195Xzc1WGV1UVJxT2t1SGE4LTJoNjFOTXRWOW1DYlp6TnQ2REhGUGkwek9SQ3VwNDlla2RIS3BOMGhhMnR5V1JNTGY5cENlSElNMnMtZVFLdHU3ellaM0xweFFReEp2anhRY2ZpUTlyTGpLX21MaTJCdF9zNzlUdz09
I've been wondering about using an external queue SaaS (such as GCP Pub/Sub) in my project to hold webhooks that need to be dispatched. But I need to guarantee that every event will be sent and have a log of it in the DB. So I've come across the dual write problem and its possible solution, the Outbox Pattern. I've always heard people say that you should not do queues in the DB, that polling is bad, that latency might skyrocket over time, and that you might have bloat issues (in the case of Postgres). But in those scenarios where you need to guarantee delivery with the Outbox Pattern, you are literally doing a queue in the DB and making your job twice as hard. What are your thoughts on this?
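For reference, here is a minimal sketch of the outbox approach being debated, assuming Postgres and psycopg2; the table layout and the send_webhook() stub are made up for illustration:

```python
import json
import time
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS outbox (
    id            BIGSERIAL PRIMARY KEY,
    payload       JSONB        NOT NULL,
    created_at    TIMESTAMPTZ  NOT NULL DEFAULT now(),
    dispatched_at TIMESTAMPTZ
);
"""

def send_webhook(payload):
    print("would POST:", payload)        # stand-in for the real HTTP call

def enqueue(cur, payload: dict) -> None:
    # Called inside the SAME transaction as the business write -> no dual write.
    cur.execute("INSERT INTO outbox (payload) VALUES (%s)", (json.dumps(payload),))

def dispatch_once(conn, batch: int = 100) -> int:
    with conn.cursor() as cur:
        cur.execute(
            """SELECT id, payload FROM outbox
               WHERE dispatched_at IS NULL
               ORDER BY id
               LIMIT %s
               FOR UPDATE SKIP LOCKED""", (batch,))
        rows = cur.fetchall()
        for row_id, payload in rows:
            send_webhook(payload)
            cur.execute("UPDATE outbox SET dispatched_at = now() WHERE id = %s", (row_id,))
    conn.commit()
    return len(rows)

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=app")       # placeholder DSN
    with conn.cursor() as cur:
        cur.execute(DDL)
    conn.commit()
    while True:
        if dispatch_once(conn) == 0:
            time.sleep(1.0)                     # back off when the outbox is empty
```

The FOR UPDATE SKIP LOCKED claim lets several dispatcher workers poll the same table without double-sending, which addresses part of the "queues in the DB are bad" objection, though the bloat and polling-latency concerns still apply.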
r/softwareengineering
post
r/SoftwareEngineering
2025-03-06
Z0FBQUFBQm9DbVhRc2JKSHFQNkVvR0VXakEyazZzek9ZdmtZUXpxd1VvSWJsOXRUcDVHWEdpNVhEOE9ROWZhWHdrUGxsN0hMTHFUWkJsUUNHdzFZOE1VM2tGRlVxMV9adHc9PQ==
Z0FBQUFBQm9DbVhRZndZeVN1MVlGZC00OU1lLWNPQlAyVUtEOFhTeGhzODZTSHg2dTRKU0gzU2dDVDVYT3RTOUtDbFIxbzhsdnpFSjdBU2dMM1p2ZmQzcDJuTW44S1N6dFduOFl4ZUxyRXRsSTR2eTdIS3oxU29QVG0xbUVBajdtLVNKTTJFbWdFWW1LbFY2aVBXOUNXNzV6bXBORVF6bjdJVXYxRWMyY1ZGMllycVg0bmgycnlzRERNX2JYVDNlS0pIcG5ZTjY3TkZ4aUw4MG1taDgzczZmbHpyb2hMLWdwdz09
Well, GPT-3 and GPT-4 have different architectures, e.g. higher dimensions, more layers, etc. So there is no 1:1 mapping possible - maybe distillation, but then the student model has worse performance than the teacher. Within the same architecture we can of course do fine-tuning, but that has its limits before overall performance declines - in a way it's just a different initialization, so it's good for small changes rather than big updates.
r/neuralnetworks
comment
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRTVYxdk5mS0JuTWpUQ2xJeGNzTTBGMFlGaGJwdFppN0xuLTRXWG5ES0dvV0d3dU9TdUtHdl94ME1wRTZrUTRmT3JvMlByb1dVR2dQRl9FazdxUXNJY1E9PQ==
Z0FBQUFBQm9DbVhRQ3ZFT1VWZ2UwektvdG9fVFJzbGM4RTNHcnltRzJJZXRrRlFpVnozc3Z2OV95WHpRd3A4R3dweFVhVThsUkhhcUxSUnFvY3lRODQ1ckVUU0RKd0JURWZZWng1d01YNzZjc3doaWtsRGExRkJMZURiTmUyLVZzRkc5blFWdUVBRXhpdzFNTVJuZV8td3pNOEZzZ3U1WWY2OU1TYVZaQTN0dEp5MWhfellwLTlVTEgzRWllN3h2ZDhrb29HZE1iODVHLWdaVEVWSkk3LWNLbVF3c2FNc1FoUT09
If you wanted to train a maths prodigy, would you rather start with a baby or a 60 year old?
r/neuralnetworks
comment
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRZjZuMWlVeUQzRXI3bXVfNVBkSXRBT2UxVkJRYXlpM2txbHk3QnFHZUxvazJkb3hTZk9iWHpKRVNQaUQtZ2pFTU1mby1mT0Rqd1RQMW93cC0wa3NONFE9PQ==
Z0FBQUFBQm9DbVhRcmlJeVRfSG1xNjhVY0RveHRGc3JRdnlSem1tVFNLVUFmakRVcHpzQ1E5YlZDZm92ZGFyTU9zcmRBT1RLbmR2Qm4wTEZyanpWUU1iVWJmTW9FUWE1V1NGMW9Tbk5PWURaMXhOLWxLQ0F0NTJYYUk0MzZ5Q0JWS3FGMVhSOWUyYXY2dE1lSUV4NWZKVE5XSklHQk5nSm85Z28wQUpDN0NMcDRPYzNrY1EweWRUbnIzclBKWXRVZlpneFBIOWtZTTZwUWh6a2EtY2t2RE5kMWpkejlJdFh2UT09
Nope, they keep adding more coins to the supply every day. They will keep doing it until your bag is worthless. …
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRVVBGbzRwRlFFcXlDdGxmeHBJY3VpREpfTzR3aVFqY2tMLXI5ZmR2V2dQMjNlRlhmcmtQaE9mMlVTbkRBc0J1RmYxMThnMFo5cUJkRHkzaDY0QzUwQUE9PQ==
Z0FBQUFBQm9DbVhRbXJBb1cxUjdEWmdmZFpXbFlUb09VWUVlOHBsS3Fla2NXQkxSeDhYcVBCYTMyQUNCcV93M3Qxd0RURXZ5QmhGb1o5N3lyQzhXTFZXcmRiaC1MOG12Ykk1Zmh0dXZZX1g4NmVsMkFiNnkzYk9xc3pDQUJGeG5vR2dWaWZTUWZqSVJIU0paYkdYYkREZEZqYUtmQUtIbWpLUXdIOUJtbExzYXBDQ3VoVTMtYUMzVnVNbnVYcTNCUHJRQS1OUlFWaUtwcTlUaHZ1Q1d0clItMGxjbF9jT0JBUT09
Not anymore. 12-18 if the perfect conditions are ripe, but that's a stretch even now. They need to fix their tokenomics.
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRTWlQVnpaVkJJdUd2ZXZNMVJWOFJ3ZXFEWk52STVUZzI0Q0cyMVVaNTI1NkRESnF2YmhWSTVIbzhJVGtDVDhoOVcyOEE0QXhLeU9mcVpEeTM1WUs4TlE9PQ==
Z0FBQUFBQm9DbVhRTHc0eHRvTDNSV01nWUJlNjVlNE5hdEhmRFpiUHUwNHVSa1hmLWhoVWp6MWtYY1BGZDNyd0Qwbk9PczU4WGEtdzc1OXJGekRwbm00MklkMGVUNU8tX0JxNTJvaTBIODJSTmY0b194bzJRUFFKQS1UaU9Uc21icS1VbDhsYW5hYXJwN2M4bU01RmYwV0pNSjBFSFJNaERjYk5aanlTMC1YT01iS1FrbGtoVTVwRnRUUTVDZUh0NW9tNHNWdDZxTjcwZkRZRHB0MExMRGMtMGxEOGZyTVdnZz09
We do; look up continuous deployment. When we sleep we undergo synaptic pruning, memory consolidation, etc. Same deal with continuous deployment.
r/neuralnetworks
comment
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRU3VTaUZMMWdweUpnYUEybENBUmtHR19fUGRTWGU2ck5vREdjLVJPS2NoVi1VUkpYLUEtaUVMWXRaX0IxcEk3OS1yTnE2eGNibTROSHVMOHVtdGhhQVE9PQ==
Z0FBQUFBQm9DbVhRTHlRNGlpS0Fsd3NIcE55RFhBUzQxaEFyZ09odFZSSmwySVdNc1V3VTRqTFpuRV9ISi1PTTdHdkVRY1RERU5HMURPUGtKaGdxOUpodEUwYU1Zc2hSMkctWDVocm9TblhlWVZIekhGWjB1TTVzbjRUWUZGNXRCZDRvOHJLemYtamtLdndxMFduX01mTFBwRkljeVZTQWhJNjNUblJHeGhfdDhXRnN2OUQzM0dnUUwxZXdlWnBSUXBUQXVnNVg5S2lIWGFwdUxYbDk0Z2toQTVmMEtycFgtUT09
I'm sorry, but you need to reread: there is wrong information in your comments.
r/neuralnetworks
comment
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRNjBGUjVKbFBlRV9sd21QMUdqWUIycVQ3ZFpvZURrYUZVanBRekQ4VkNYcGpsdkpUMXBMUm4wa2VwNVNOd3ZsN2hrd2x4OXhycG1xWGNxelhZalVmN1BfWG9MRlIwVWJOdEZOMHdheG5LQVU9
Z0FBQUFBQm9DbVhRTnhtN3VMMUJZc0w3cS1EQ3ZzRHJ3Ym0tT0RfYmw4Y0pHZE8zbXZQU2VWY09EeDdvOXE1SkJOWDFYNzI5N25FOVhlNGF3MVJTRm15eDJtZ3pIZ2UtWERxX0dmaHdUcDVLcTA0QzVseDlJcnluUnhYOS1icUdvR3NNWUt4T1NqcG5sdlUxOEVkdnhYR2gyb1BmMi1pUGtxa2ZxSmxXdGl6R1V6elZWWnk2TmZ5VWJuM0VNbFJ3REJaNGVmMzM0R0NSTVJYRk9DUXVvZ3V1MWZPRHI3dzRFdz09
Point here being that the 60 yo already has strong neural connections, and it's harder to reconfigure them. Harder than it is to configure the neural connections of a baby, which was not yet trained.
r/neuralnetworks
comment
r/neuralnetworks
2025-03-06
Z0FBQUFBQm9DbVhRWnA5OW1pNG5RVUxnWWQ5SWdidWZfODJ0Ti01Uy1mN1FfbzZHdF9jVDh3ZGRKcElxNzl4bTRLTTQ5cmNvQ1pYbjUwR0UzUGZwVkx4bkpYbUxSWVRsc1E9PQ==
Z0FBQUFBQm9DbVhRSm81UjhnSGRpNk16UDNTeUJoVy0yLXNzcmVzek1wc1NfT1ZvOC1WNG5WQ0VCdDRKRzJaQUR3bUFYQktRT2VhR2FFZEhTLXV3WUItRWRrSGoxV1BYeGZOdEM0QVpvMGptZ2hjRzB0Qy1ZSThWSmRmUUt0Qjd2NXl4Vzl6aDRGaVRzOFBqLUV0aU01NUFqWEJHWXhfTHpja05zVUUxSjJ1NGdZYUFSM2g1T3VjbUFYSHFJZ0RfTHVCTHRsbmxCVXp1elB3QWJmV0tid0pSMmJXemhBdGx2UT09
Maybe $7 if they play their cards right
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRcDBPZlJvTk0tdExidkVQTWlCWVg0NFJoMFVQclI5ZGxueEVMZ3p6QUpEOUVlTXBhRkZKMjRKNnJGYllYU0M3a3AyVG5VQkNybC1qZF9TS1ZzMHhPVWc9PQ==
Z0FBQUFBQm9DbVhRajZfWVZTOUFsTnBNNTIySEIwVko2QVJERHRBZnFLQUxCaHpDcHNENUdqMnlZQ2dDSWdhQjFnRXVqUzkzbloxR0p5cnl1YjdRS29zOTRWR1lKWkR2MXFqQlFvbmNva2psR09WZXdsNzFlQ01lMlpZV0VwQkZxSVFFSUJyOVBEa3o1UXVrMUs0ZWl3Y0dORGFfbjBTTENPR0dWUGpPWHg2MnVRLXJRaWNtWlkzT1ItWGN6X0NBM0wxVzQ1NjMtZVVzTld2a1BMU2xJa1dwUDc5SmRERDdyUT09
Can filecoin hit $4?
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRWjYtcW56aExwYWpKaXhKcmg0WENoNjNiUUFVZnllTzBGdnBLdEtQM0hyQkJneU1TREYtMVNEVHR2dUFiYW1XVHdPVXNwQ2cxenVfTDJNcFJfSE1JRFE9PQ==
Z0FBQUFBQm9DbVhRbk91dExUOXlCcFk4QTJHVlZ6TWdHNkNoSzU4Z2V3a2x3NGtLUzRtRmNGOUEtcV9mMjZIamw2YWwzaHZLNlpOOWQyT0JKdlVlZkU5RWNteGpnYi1MSlJUTVZUZW5XRnc0bm9mNHRMS3RqX2lIVUxFdndsYnJudzZhUzNUa1pWdGdvNjZlUmpzMHhYa2tVNE5jOHM3VEViNGVveEtOZlY0QU9mMVRReWRycXc1MkFqSF9naTBZSGRkRDFzSFE3VEhmM0pydHN2MTVvSENhUi1tV2lmMC1LUT09
Extracting structured data from PDFs, especially complex tables, is a tough challenge. We compared olmOCR, an open-source, budget-friendly tool released by Allen AI last week, with Gemini 2.0 Flash, Google’s AI-powered model, to assess their performance on tricky document layouts. olmOCR is cost-effective but struggles with table accuracy, while Gemini 2.0 delivers near-perfect extraction at a higher price. For a detailed breakdown of their performance on real-world PDFs, see: [olmOCR vs. Gemini 2.0 Flash: A Comparison for PDF OCR](https://medium.com/@ali.sheikh_64228/olmocr-vs-gemini-2-0-flash-a-comparison-for-pdf-ocr-37fd5ed8bb37). Would love to hear what OCR tools have worked best for you.
r/legaltech
post
r/legaltech
2025-03-06
Z0FBQUFBQm9DbVhROENicGJDblRJVkE5YTExLWhIR1ZPNVFxenRacUZ1RGMxWk1xQ25nYlhDSzJneHoyN2ZBZmhWcHlTYmowWlJtYUlmT1JJeC1FVDRTV3ROVWFNb3Q1R0E9PQ==
Z0FBQUFBQm9DbVhRbno0M2k1dGJsSFhSNTZGem4zU21FWmJ2OXY1ZHVpSmNyUk5halAySWVXaUR0WXVqVHhueVB5cWdxQTZ6dUptNnJGbDBMODFGbE9QUGZuV2Q3aFlkSF91OFpLb09oQ1NVbjZob3VMLTdaOTExYjl0NWdBdjBaaFhSVHVOUV90VnUxVUY5emVZRE94N1pMWUg1dFVJMXBJUWIwUlNtRW04aXVMaTFTZlQzZEozTXREeHplVkF1b3d6TmJTeHRNdldxaWxLVkI1UlhPbW9nN0VnMFhTTkVDdz09
Yes, surely it is; the whole project is based on AI.
r/airesearch
comment
r/airesearch
2025-03-06
Z0FBQUFBQm9DbVhRcDA4UFZNazhTOTJWbFcxQm5IMEZaZXl4aXRmMzN5dnJhbk9iQ0xvbWkteDdrMWtHN1ZvZHZjbVV0RkVrV0xqMTNVTi0wOVBJVXdFb3YtVlEtWEo2dGRyRlJBSmJ1QTMwVTJXNVZ4VGNsYU09
Z0FBQUFBQm9DbVhRVVpPdW9wTXVHUDJ5aDVYVlItRG5SbTVvVU9oOXRfek82MXZBYkFwWjRVcEgweUc4NFcyNUFXT21ZZ2ZBTVp3V0FZOGtsbFZyWWswUi04cm9hZWdRV1hraV9IaU0xUFZzYUhMWWNMcDJNbWVQaU9YOVY4Q2RFSEtJSEt5QmNSc0p2WDNFRS1BZjBicFhId3Z4dUx3VVBZTVZxWEdSVkNRTWI1QzlKS003Z2x5Z2pFWjJLZkdUYkZRay1zVXlaZkNkYVAxVzNTS2ZTRFJZZldZSGVWZFhEUT09
Well, you confirmed I was right, so I guess I owe you a real response. The main challenge I think you'll face is combining the known historical facts about the ancient society with facts about the people in that society: what they valued, what was funny to them, what they ate, etc. To make this work, you need to start at the individual level and build up to the society level before the model can make an intelligible reconstruction of who they were and what they would do. I did a personal project to see if I could talk to fictional characters conversationally, which is easier than it sounds because everything is known about them, as they are fictional. I provided all known information about them and then had AI "create" them in my test space. You'll have to do that, but to a much larger extent.
r/airesearch
comment
r/airesearch
2025-03-06
Z0FBQUFBQm9DbVhRaFpmUDY5bkdIdkdpcHRtOEdUMXBhT0d6RVRBVy1pX19RWDRHSGM5RFMycU56RW1NWEJoUEdXZEJ6cG5wM0JvM2wwblVfLS1PUGEyUGtBMmNCcVphT0E9PQ==
Z0FBQUFBQm9DbVhRNGtGVWVTNkZRZ0tnTE9mVWdEZGNwdFh0VXpkZGdneGgwSGF6eHNpZVhQbFNMdTg3QzlrcGR5WGlzVjBvX05ZbnRiZHd3bS1ZMWtiT3hKR3kzWXdtbVVsNHNKS1N6Szl3RXZBRXk1ZlZTNmYtSE93c1FiQnRjcklXaW5Fc0J1Qjk3ODhIS256SlpHM2pHbzkwYnYtZHFOSEx2UldZZm5DaURWTl9LLVZYS2FkRWhsZEJ3OVVKUWYxUHh3UFBaQXh2ZEFQak9TbHQ2dS0yelpEN0pYTFIwZz09
Well ... there are always strings attached when Microsoft gives you a product for free.
r/filecoin
comment
r/filecoin
2025-03-06
Z0FBQUFBQm9DbVhRMGlUWjBsYU1PVzRJbGdCZTNfcjBTREZwTzdNODFra0dfRnpPV0JHNkVxM19wdlBubmF4cUhvWFV3UlZIX0M2X3NkV25wTkhGSnVwNHpNOWEtQ19SU0E9PQ==
Z0FBQUFBQm9DbVhRaXNEOGU4RVpfTEx1dUNMYWNDY3dQSjhxa3d1QUJ2Zlh6R0RkVEttLTYzZWVUcWJ5YUZMU3BtbnRUY3hvR1BmcXFtVXhDcFFuOEgxVmM5Wl9Xb3hWb0U3YWRWTlJqaUJKdEVUVF9ia2lQQUxjN3A2ZGlfb19yMElRalI1T1I0R1lQMkZDOUgzQU0taTMtOUVFODhNMzNhWUt3UmQ0R2YzbTRnYTJLZ2tUd2g3eVRYeGVpNHpudXFmN2UyczlZZTRQMk1tdHFYdlhGUm9XS21tUzY0TXVWQT09
What are people using to double-check case citations in AI generated work?
r/legaltech
post
r/legaltech
2025-03-06
Z0FBQUFBQm9DbVhRYjFWUFM5VkNSdm13UDNsSk5DYldtT1R1S3VvVU5IOWFadVdqdE9SMllET2MzZ0JuMHY1cDMwdmFtN2RuZzIyRExWcHhrVGplUjJ0cEpJTDJKdlVyUEE9PQ==
Z0FBQUFBQm9DbVhRb0hjc1I5Z1JPSGstU3lvZlpnVEw1X054eC1tdDBJQkNxWGF1aVRiVU5kVGt6UEhsQmpJeG5FdjFKVWI5UHY3cGRkWUkwRTRjOVQ4cDBlTThaalRiOUx6STFRN1VXbnh6U3JnSnI5b0p5U3JES3d5Y2ZQUkFtNTFzY2FEaTh0M3pfVFl5S1NBeXkyVXktVWc0TFlkWUw5RDBQUFRFS3JMUWZnZ1A3MGtKQW5ud2dBWVRIam4tTFZxcmIxcTJ5Vm9B
# LINGOLY-TOO: Using Obfuscated Linguistics to Separate Memorization from Reasoning I've been looking at a new evaluation method that tackles one of our field's persistent problems: how do we know if language models are actually reasoning or just regurgitating memorized patterns? The authors created a clever benchmark called LingOly-TOO that combines linguistic puzzle templates with "orthographic obfuscation" - essentially changing how words are spelled while preserving their linguistic structures. This lets them measure how well models generalize linguistic reasoning versus just pattern matching. ## Key technical points: * **Linguistic templatization**: Created systematically varied puzzles across phonological, morphological, syntactic, semantic, and pragmatic categories * **Orthographic obfuscation**: Modified spelling patterns while preserving underlying structures * **Measurement metrics**: Quantified "obfuscation gap" (performance drop between normal and obfuscated versions) * **Model testing**: Evaluated GPT-3.5, GPT-4, Claude 2, Llama-2, and Mistral in zero-shot, one-shot, and few-shot settings * **Results**: Found substantial performance drops (15-25%) when models faced obfuscated versions of otherwise familiar puzzle structures * **Few-shot improvements**: Providing examples helped but didn't close the reasoning gap * **Best performer**: GPT-4 showed strongest capabilities but still demonstrated significant limitations ## Results breakdown: * Morphological and phonological puzzles showed the largest obfuscation gaps * Models generally performed best on syntactic puzzles * Chain-of-thought prompting helped somewhat but couldn't eliminate performance gaps * The benchmark revealed that current models excel at pattern matching but struggle with abstract reasoning I think this approach gets at a fundamental question we should be asking about all our models: are they truly understanding language or just exploiting statistical patterns? For practical applications, this distinction matters tremendously. If models are primarily pattern-matching, they're likely to fail in novel scenarios where the patterns differ but the underlying reasoning should transfer. I think this also suggests we need to be more careful about how we interpret benchmark results. A model might score well on a language reasoning task simply because it's seen similar patterns before, not because it has developed general reasoning capabilities. For model development, this points to potential training improvements - perhaps deliberately varying surface forms while maintaining underlying structures could help develop more robust reasoning abilities. **TLDR**: LingOly-TOO is a new benchmark that separates memorization from reasoning by testing language models on both normal and deliberately misspelled versions of linguistic puzzles. Results show current models rely heavily on memorization, with performance drops of 15-25% when surface patterns change but underlying reasoning remains the same. [Full summary is here](https://aimodels.fyi/papers/arxiv/lingoly-too-disentangling-memorisation-from-reasoning-linguistic). Paper [here](https://arxiv.org/abs/2503.02972).
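Here is a toy version of the orthographic-obfuscation idea (only an illustration, not the benchmark's actual procedure): apply one consistent letter-to-letter mapping so the surface form changes while the structure of the puzzle, and hence the reasoning needed to solve it, survives:

```python
import random
import string

def make_obfuscator(seed: int = 0):
    """Build a deterministic letter-substitution mapping applied consistently to a text."""
    rng = random.Random(seed)
    shuffled = list(string.ascii_lowercase)
    rng.shuffle(shuffled)
    table = str.maketrans(
        string.ascii_lowercase + string.ascii_uppercase,
        "".join(shuffled) + "".join(shuffled).upper(),
    )
    return lambda text: text.translate(table)

obfuscate = make_obfuscator(seed=42)
puzzle = "dog -> dogs, cat -> cats, fox -> ?"
print(obfuscate(puzzle))
# The plural-formation pattern is still recoverable from the obfuscated text,
# but memorized English word forms no longer help the model.
```

Comparing accuracy on the original and the obfuscated version of the same puzzle is what gives the "obfuscation gap" the post describes.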
r/neuralnetworks
post
r/neuralnetworks
2025-03-07
Z0FBQUFBQm9DbVhRU3k0VTlNclVCQ251RElZTnlwMmx6VHRlSXFlYloyNk5uV0FsNWt4RE9Idll6aXZxcmNBbDhZYlN0RWpVcFNhRTJQTG1xMXBKR2pxT003Uk5EMGFHSW5KUG4tVWItUnV3dURGNTVYMEozaFE9
Z0FBQUFBQm9DbVhRU3BzVmVKMDNpRGU3LWpEd3ZZeVRPaEM4RDJJZndvd3lMQ1E3bmdWb0lna0hBVnFqU2ViMmpLelphRHRYUDhScldyaW5hU2hJc2JJMU1kMmpvR0ZYQUpNTldKcWUxd1oteVpDV0NZNkJzYVNSLWc2MzVKZjVtek1MT0dsYmlGcERIU2FLQWd0OXdra2p4ZVdMYjNrVmNocmJFZ3ZxbXMwV2lYOTVERXZNOWZHT0laNTBsc2VaUERvajVRaWJMMkVRRUdIZ0tjT3Q3ODhiVnJfZTdrLUN4QT09
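The obfuscation-gap metric described in the post above is easy to prototype. Below is a minimal, hypothetical sketch (not the authors' code): it scores a model on matched original/obfuscated puzzle pairs and reports the accuracy drop. `model_answer` and the toy puzzle pair are stand-ins for whatever LLM call and data you actually use.

```python
# A minimal, hypothetical sketch (not the authors' code) of the obfuscation-gap metric:
# score a model on matched original/obfuscated puzzle variants and report the accuracy drop.
# `model_answer` is a stand-in for whatever LLM call you use; the toy pair below is made up.

def obfuscation_gap(puzzle_pairs, model_answer):
    """puzzle_pairs: list of (original_prompt, obfuscated_prompt, gold_answer) tuples."""
    def accuracy(items):
        correct = sum(
            1 for prompt, gold in items
            if model_answer(prompt).strip().lower() == gold.strip().lower()
        )
        return correct / len(items)

    original = accuracy([(orig, gold) for orig, _, gold in puzzle_pairs])
    obfuscated = accuracy([(obf, gold) for _, obf, gold in puzzle_pairs])
    return original - obfuscated   # larger gap -> more reliance on memorised surface forms

# Toy example with a fake "model" that only recognises the original spelling:
# pairs = [("puzzle in normal spelling", "puzzle in obfuscated spelling", "answer")]
# print(obfuscation_gap(pairs, model_answer=lambda p: "answer" if "normal" in p else "wrong"))
```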
Hey, it depends on your needs! OpenAI is great, and Grok is another excellent option. Perhaps it's best to start with these two and experiment in your work; you may like both, or neither. The key is to try them out yourself and choose the one that fits you best. Answers here might work for some, but they may not be exactly what you're looking for. For example, I'm using multiple AI models to create summaries on [cyberintel.info](https://cyberintel.info). Check out the "research" mode, which summarizes research papers (which might be what you are looking for).
r/airesearch
comment
r/airesearch
2025-03-07
Z0FBQUFBQm9DbVhRclhwVUZnWm9NeVM0UFlaa0V5VkRXQ1FIdDliUEdkWTNULVFUYkE4VUFMOF9Wa3k4b3FPMUZ5R2tPMnVWcWJNT2w3UVdLTTFPV1JuTk1pWG1rcFhMZ05DM1haS3JRY3lIanNFT0IwMWtYZGc9
Z0FBQUFBQm9DbVhRbzJLZVkzTzdQQ0EzQ3ViOGtvaGUtWFNKdHJmN2FXdzR3V3hoQzNWZDZ0cXpTWXd3LWVna2l5VFRaLWE5V1lUazBTZW55Y01hTjZOVUJWZjZkeElrNzBwQVRNeWszRTJ2akRJVFNJaE1mQnBkWXQ1SmFvcGxqcFNGeFpXc2VUbG9YZzhqRU1jRWl2aGtIVDhJM2I5T09nS2RLZ0tkTWNYdDYzdTkweVk4M1JXVlBmVjM1cU1iTnpaSU5obVlGaTRS
Hi, I want to build an AI tool that extracts data from my contract documents, such as prices and dates. I'd also like to check whether the documents have been signed. I'm currently using Vertex AI for this, but I'm wondering how best to architect it to achieve optimal results. My questions are: 1. Can I train the OCR part of Vertex AI to make sure it's recognizing text properly? 2. Is it best to use a separate service for OCR, then feed the extracted text to Vertex AI for data extraction? 3. How good is Vertex AI at identifying whether or not a document has been signed? 4. Are there alternatives that would be better at all of this?
r/legaltech
post
r/legaltech
2025-03-07
Z0FBQUFBQm9DbVhRejc5Q0Q2enllSEpsQ0xDanhhdk9rUGJvNnVZWTlfdXlzNGFKOEIyVU1wUnIyYjRCa3U0Rk9hVXNLUW5ZVnVNNFRuY21yQjVmRXgwU2xXYzZVN196R3NoT2piQWhQRnZvZ2VRMUlfRG9BM0k9
Z0FBQUFBQm9DbVhRVzhXSlY4d0hVSEstVHZNNmhUa25aRnowWVNia18xQTRCRE56U3pBbEVXaGdSamg5cnI2b0wzRmZFb3FTd3hqTmR2X0M3QkRYY2Y2UVpPVVU1WnVvOE9PNVNWWEJuUkZ3U2xWSG5lSW5hNWFxS3VLN2hWRmVleVh5R1ZwanF3ZUVmYjBmQW1HVk9JNVFqckVCWklQZjFxRTlERnN3dG1RYmNLVFNnUU42VlVjd3BfRjR1UzluU3NtNVU3VlFrWlk2
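For the architecture in question 2 above (a separate OCR stage followed by LLM extraction), here is a generic, hedged sketch of the two-stage pipeline. It is not Vertex AI-specific: `pytesseract` stands in for any OCR service, `call_llm` is a placeholder for whichever model endpoint you use, and the field names in the prompt are illustrative.

```python
# A generic, hedged sketch of the two-stage architecture: run OCR first, then pass the
# extracted text to a language model for structured field extraction. Not Vertex AI-specific.
import json

import pytesseract           # local OCR engine; a managed OCR service can take its place
from PIL import Image

EXTRACTION_PROMPT = """Extract the following fields from this contract text and return
JSON with keys: price, start_date, end_date, signed (true/false). Use null when unknown.

Contract text:
{text}
"""

def extract_contract_fields(image_path: str, call_llm) -> dict:
    # Stage 1: OCR the scanned contract page into plain text.
    ocr_text = pytesseract.image_to_string(Image.open(image_path))
    # Stage 2: ask the language model for the structured fields.
    return json.loads(call_llm(EXTRACTION_PROMPT.format(text=ocr_text)))

# Usage with a fake LLM, just to show the shape of the result:
# fields = extract_contract_fields(
#     "contract_page1.png",
#     call_llm=lambda prompt: '{"price": "1000", "start_date": null, "end_date": null, "signed": false}',
# )
```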
Gregor Townsend vowed that Scotland would stick to their attacking guns as they strive to salvage something from the Six Nations campaign. The Scots have faced calls to introduce more pragmatism to their game in the wake of an agonising 16-15 defeat by England two weeks ago. Townsend’s men outscored their Twickenham hosts three tries to one, but passed up several other opportunities, failing to come away with any points from their ten other entries into the English 22. Their head coach insisted that there was no need to rethink strategy prior to Saturday’s Murrayfield clash with Wales, even as Scotland are staring at the prospect of a sixth bottom-half finish in the eight Six Nations campaigns that Townsend has overseen.
r/sixnations
comment
r/sixnations
2025-03-07
Z0FBQUFBQm9DbVhRM0x0N0JZNkxCdDNHZTRhVFFfZkE4RnRRblRXSDVRVkJsTEVmd0c5X1Q1c1VHV29iUDdYaG5fclJ6RDF5VHctb3llTV94TGlyZTF2NlBLRUFuT1RlUWc4WGNJR2F0bjBJTW45UDNQQTRaVlE9
Z0FBQUFBQm9DbVhRclhqMGpwTkNncjdla3ZteVN4SjUtS0Jianp0WHhwbkFzalE1akZXcllJZTg4YlM4cnAwSGU5aWdHR3dCUkhuUWFiUXJrZWN3YUtxSFhzbUQ4cXBYR1dTSERHN21XemRIYmhIX1FYQXRsdzhMTXVKM0Y1QVBuOUVnaW5JaS0wSTNUdzV2UFJpQkxkallrYmUtUHRqWGxWejdYX3B3MWRKaG42dlVUUjQwUWpWaGRPZ1I0emg3WW8xQldOQmIxbGs3ZkJzNGJKbmY4RUlGYVItRWRBdmotQT09
How are you measuring whether people are effectively using the legal tech stack they've been trained on? Have you found ways to identify which tools need additional focused training?
r/legaltech
post
r/legaltech
2025-03-07
Z0FBQUFBQm9DbVhRdDdsYVh0MmUycHoyZm05X3RTWVlQM1JoUGd4RzZ4SV9oY1hDTE1FNXd5RGN1TG9wSGlMWUxldWVEY2JJeW9RbktPaEd4MUt2NzJwOFlBYXZpWmthcWc9PQ==
Z0FBQUFBQm9DbVhRdnI2UE1oVmVDWDBCTjFQLWp2ZVliRW9vaTlKUGxQZ0duMEZZQWJSV2JyLU5pYmV5TndkRnRSXzVDQlNrMDA5NlpIWXBfUHVuWld6YU93UFlZY0ZPTEQ3VGEzWVVINjFfWl9GbmdvV1Y3U1JwYVRuRUVTOF9tamVmdkQtZVlpNlJhVmZ6RU5UcWhzbHZoY0lfay1jSTZ5WW8xN3RxWG9kaXVyNXhHakM3eGt5V3daUUY0N3YwZFY3b1JvMDdCRy1hSUJsSTZTNVByNGFyS0xZOW16cXlsUT09
Cool, what happens when it works?
r/filecoin
comment
r/filecoin
2025-03-07
Z0FBQUFBQm9DbVhRUjVZTTFCdnhDalNVN0pSNjVLaHRncVpCWUU5SjdIcVNxSGw5Y2FxQnRRb3R5U0l5Yk11TUtMdTRDMFNYMmk1SVpvN3BQTE53ejNqaWY3X1ZvVlNHSjFmdkFDTzBwTmwwZXJiWU10TkkzTk09
Z0FBQUFBQm9DbVhRV0xfd0Fyak4zYWIyb0pVTHRuWVVRR0Vma1Q3U3JUTGc3N3dwLXVvcmpFX1dqdl9tVDlpZUQ4SlVwZ3UxX2ZraEt6LTNvUExfNU1jQURaSE9YWUl0a3lXclR0ZS1fQlNEY0h6akxnWm1GV3dNVHJvS0UzYXdVUUVIVmxHdHRHa1V5UWdFMXNZUXpVNExFeTNua2hEMVc1aU5GTFd2WHE4YjlrMnBZMTRSeTJ2YVZudjVxQmNTWk9SNjVCeGg4MndCUWhscVUteEJmeHU3YmVNdnJzeU5QQT09
Hi everyone, I'm looking for software documentation of an open-source project to support my thesis research. Ideally, it should be consolidated into a single document (maximum 100 pages), covering small enterprise applications or legacy systems. Most documentation I've found is scattered across multiple files or resources, making it challenging to analyze effectively. The documentation should ideally include: * An overview describing the system's purpose and functionality. * A breakdown of internal and external components, including their interactions and dependencies. * Information on integrations with third-party APIs or services. * Details about system behavior and specific functionalities. If anyone can recommend a project with clear, well-organized, centralized documentation meeting these criteria, I'd greatly appreciate it! Thanks in advance!
r/softwareengineering
post
r/SoftwareEngineering
2025-03-07
Z0FBQUFBQm9DbVhRWGd4QUVsOEp0VzZSanp2amhqRzlhVTl3RFJNbklqZ0Q5NlVRVUZjcHZaS2d1cFBmRERENDVoX2pXUXRLTE9tTGNSdEFqMXlzM05KSUl1c2MyRlByMVUxcG85Z01QY0RmcXVSUHVONER3Rjg9
Z0FBQUFBQm9DbVhRSDJyMDRoSkdoUDNSVzZ0VHFRendjN09zdHRhMWRCQ3BZY3htaHpaZmlqX1otWENsLUhYX3BsR1djc2Z3dGJFQmlxZjJtckYzNnF6TGRSaDFJRGtzTXZta0dhZGlBQ1ZOQTNDOV8wTmFiLUxSOFRhOURjWTctX3ZwRV94Uy1meHVoUHViLUFYSjJmeDRVSDJvTUJjR0tWTmU5WllUc2RLcElBTklJYzZvUERRYktpQm1JSnl2eDFRcVlGa0FweTVk
CSE student dev here, working on a hackathon project. My friend and I are making a decentralized digi vault. It will work on Ethereum: after linking your wallet through MetaMask, an NFT ID is generated which is secure and forgery-proof, and all your details are stored on Arweave. You can store your hash on a pendrive and use it to access the digi vault, which will hold all your passwords and files; only you can access the vault. Would you use this, and what would the downsides be? Any suggestions are appreciated. I know people comment less on Reddit, but if you would use this, please comment. Also, please tell me if the flair is wrong.
r/ethereumclassic
post
r/EthereumClassic
2025-03-07
Z0FBQUFBQm9DbVhRcDJzenlVUXlpb2JNbzBfc1E0WkFYa0lOcHVoTzVaSEJ6RHZlWk5FeFp4ZFhVOGZVUXhPTW5sWlRJUlVaUVlsV2RLT19XLXdPTk80VjBNdHNTUW1WLWNZM0FRYjVrQ1o4QnJVUDRJenNZMTA9
Z0FBQUFBQm9DbVhRdHVZYzU2emZvVGJQdnRHOFdLeS1LR3p4Z2dnMG93WGhQMkRKQl85UG5uakRxQnZNT2JVOUhoOFdqV2VTRlQwU0xROVlIR2M4a0s2ZXM5S3ZJMldLT19iWTlkaXdYaE5pSXlqbzUxeUFiM0llTDQ3SnVVNnRvXzRNaWxWVldRQUhrNEpLTUVhcVA2VEZFNi1PZXR1X2dvNjNrM3k4QTZwQnh4UnR5am9BUHcySEFBQzVWMWZqN205TkdBWTFqS3M2RW9NaGFHV1A4R2tjclB1aHlRcHhCdz09
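A minimal sketch of the pendrive-hash access idea in the post above, under the assumption that only a hash of the secret is recorded (for example alongside the NFT ID) while the secret file itself stays on the pendrive. The smart-contract and Arweave wiring is omitted, and all names here are illustrative.

```python
# Store only a hash of the user's access secret; grant vault access when the file
# presented from the pendrive hashes to the recorded value. Illustrative names only.
import hashlib

def fingerprint(secret_bytes: bytes) -> str:
    """Hash the secret so only the digest ever needs to be stored or published."""
    return hashlib.sha256(secret_bytes).hexdigest()

def can_unlock_vault(presented_secret: bytes, stored_fingerprint: str) -> bool:
    """Grant access only when the presented file hashes to the recorded value."""
    return fingerprint(presented_secret) == stored_fingerprint

# Setup: hash the secret once and keep only the digest (e.g. next to the NFT ID).
# stored = fingerprint(open("/media/pendrive/vault.key", "rb").read())
# Later: can_unlock_vault(open("/media/pendrive/vault.key", "rb").read(), stored)
```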
I'm new and just trying to get into the NRL. I saw on their website that they have live games, but I'm not sure if you have to pay for them or not. If they have full replays for free, that would actually be better, so I can just watch whenever and don't have to worry about seeing it live. Any help would be great.
r/rugbyleague
post
r/rugbyleague
2025-03-07
Z0FBQUFBQm9DbVhRcm5rQkpHeVZ4VU44by1GbnZmRmZ5bnMtemF0TDZmdlc5NGIwMlZaODY3Y1NNb0stR3VLaEtNeTBEaGl2NVVFVTZ5MFJxdGRhY2pJU3FrVDBPN25TOUE9PQ==
Z0FBQUFBQm9DbVhRNndFdkRCOHdldm4yWFA3YjZDbGlfcnh6ZVhNOUFqejJWZXo2NEg4LVJ6S0lfWkF3dmRwZUI4MkdBenpYcmxGUERjcDR6Z2N6RWVyX2ZPdGxGVnQ5SEhnY1dNNWZqM3A5alFXRVJYa1JucC1Ca0lDVTlIUEhjM3kwbjRrdkhLUG1HNUVrcm5IbDkyeVFsaDZJVm01Yk12bk5ydFA1NUQ2R2x2MEgyb09yOTB4NkJIUVg3OTdVWVFBVHNiVU9iUk9VRHJSUkFDZDczcXkwMnNlb014ZDZnUT09
$50 easy, 100 maybe.
r/filecoin
comment
r/filecoin
2025-03-07
Z0FBQUFBQm9DbVhRNmdzOTVjenBGZlJZWUVudXVWTHhXVkZFOFBES1ZzeVBZMk1qdDNjVnBrMGoxRmRnQnBYRmNiT1Buby1GcDhJRExNUU85dWJEWDN0MEo4UXlaZGJrQmc9PQ==
Z0FBQUFBQm9DbVhROWxNNHBGMlIzeFB6T0RTemJ3a0wtWHk3Tkg4a1QzaW1NY3BKUXVSVjl0bEpKN0g4RGR0NnJEa2JMQ1MwYzZPVlVyanpYUUh2OGtScXd6bEl4LTNvSjc5R19Md085eGh1RFFYaVNmUTNsWGJuVXFpYlo5OHEwY1hsSF9TMFhFTmZXUE5iVXBTYlBvZGs4b3hnbmlBbkczSVk3dDBEbWpVY0RQb191c3pQZE9zQjVaUFlldDNWS1hPMHNERXQ1cGUzcjA1QTRWOEFBLVR2cnQ1TzUydTF1Zz09
$2 must come first
r/filecoin
comment
r/filecoin
2025-03-07
Z0FBQUFBQm9DbVhRbFJ5N1U2R08yTmVFbGxuN2EyNlhmQkxfTHJXcXRpamJPSmhwVnNfeTI3bUdMcUxyMnJSY3lZS0VBU05XZDg3aVVsQ0UxOVhkYUtIOGk3aVROSzIxRGYzOWFUdjlteDZIdDYyN2NQYm1CVWs9
Z0FBQUFBQm9DbVhRcGFlWFRFb0NKaXAzU2xlQ04zand3a1FjdnhXM2tzcU1fc1pTZS02MGpsQWdGYmZySUlyYTkxNklCUW5oWS1rajdkQnNHN09IUjE0OC1JNTFXN0V1R1hLUExpbVlFaWFYT05pU3F5bjRNYjBvaTFuaVQ1aXp6UEoxS2Z6MHpfX1l6WE83RFBSelJxLVZORTBDNVRZT1VFN3V3TWJSMjRvaWZUejIzMExyZ3paUWZhcTJsVy1ubjlqYVBMUVBHM1pqVTFCR2FDdlpjT2I5bnZ2SUk3aXZxZz09
I recently completed a fantastic YouTube playlist on CNN models by *Code by Aarohi* (https://youtube.com/playlist?list=PLv8Cp2NvcY8DpVcsmOT71kymgMmcr59Mf&si=fUnPYB5k1D6OMrES), and I have to say—it was a great learning experience! She explains everything really well, covering both theory and implementation in a way that's easy to follow. There are definitely other great resources out there, but this one popped up on my screen, and I gave it a shot—totally worth it. If you're looking to solidify your understanding of CNN models, I’d highly recommend checking it out. Has anyone else here used this playlist or found other great resources for learning CNN architectures? Would love to hear your recommendations! From what I’ve learned, the playlist covers architectures like LeNet, AlexNet, VGG, GoogLeNet, and ResNet, which have all played a major role in advancing computer vision. But I know there are other models that have brought significant improvements. Are there any other CNN architectures I might have missed that are worth exploring? Looking forward to your suggestions!
r/neuralnetworks
post
r/neuralnetworks
2025-03-08
Z0FBQUFBQm9DbVhRYXNSU0hKZFRaS1NrMm9xbXFGRzV3aUdKSjlyMEtkamtHbGhzX3ZtSVdBMk5Zcm5PMHJ3ekNHTjRYTFV3LWxGbFFtTnk4Ymh2ZDE4X1I0WkNvRFdUQUdZNmtUUFZNZEl0dmNvdE9JSHR0TEU9
Z0FBQUFBQm9DbVhRWHFSeW95UENFMlNiRWRYT1c2c1B6ZTBQX2ZWT1NPekNreHZsWGNDZ2NNVDd0enRGVHZkZjd6NU54VkNSclRlMGtlbE45dTBWTXZDZFBoazBmZTh5MlFMV2k4Sk42RTJ3eTI3cWVzWGZVUkxfRmRDT2NydDFhWWVBY2ExVXdFX3FfY0lQOHVNRVRtb3RKQlp1a0Q1SWotNmlxMHFQMFdvTjVXLU02eGFIbUpUTjRGbEExLUZ6N3JnOUNuVk9zTHh4azJEUi11MWM5VDFULTNjc1dpMjNOQT09
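To anchor the architectures the playlist above covers, here is a minimal LeNet-style CNN in PyTorch (conv, pool, conv, pool, then fully connected layers). It assumes 28x28 grayscale inputs such as MNIST; the layer sizes are illustrative and not taken from the videos.

```python
# A minimal LeNet-style CNN in PyTorch: conv -> pool -> conv -> pool -> fully connected.
# Assumes 28x28 grayscale inputs (e.g. MNIST); sizes are illustrative only.
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# model = LeNetStyle()
# print(model(torch.randn(1, 1, 28, 28)).shape)   # torch.Size([1, 10])
```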
I've been digging into this new PokéChamp paper that combines LLMs with minimax search to create an expert-level Pokémon battle agent. The key innovation is using LLMs as state evaluators within a minimax framework rather than directly asking them to choose actions. The technique works remarkably well: * Achieves 90%+ win rates against top human players on Pokémon Showdown in certain formats * Outperforms previous SOTA Pokémon battle agents (both supervised and RL approaches) * Demonstrates expert-level performance with just 1-2 turns of lookahead * Shows sophisticated strategic thinking like proper risk assessment and planning * Reaches the 99.7th percentile in VGC format and 95th percentile in OU format * Works with different LLMs (GPT-4, Claude) as the evaluation backend * Handles the immense complexity of Pokémon battles (1000+ moves, 250+ Pokémon, numerous abilities) I think this approach solves a fundamental limitation of using LLMs directly for sequential decision-making. By implementing minimax search, the system explicitly considers opponent counterplays rather than just optimizing for the current turn. This could be applied to many other strategic domains where LLMs struggle with lookahead planning. I think what's particularly notable is that this success comes in an environment far more complex than chess or Go, with partial information and a massive state space. The computational requirements are significant, but the results demonstrate that proper search techniques can transform LLMs into expert game-playing agents without domain-specific training. TLDR: Researchers combined LLMs with minimax search to create an expert-level Pokémon battle agent that beats top human players and previous AI systems, showing that LLMs can excel at complex strategic games when equipped with appropriate search techniques. [Full summary is here](https://aimodels.fyi/papers/arxiv/pokechamp-expert-level-minimax-language-agent). Paper [here](https://arxiv.org/abs/2503.04094).
r/neuralnetworks
post
r/neuralnetworks
2025-03-08
Z0FBQUFBQm9DbVhReVRJWUt4NzlVMmNpemJmMkx2VDNEVEhhOUdCOWhtdW5CQl84V2N2Z0VoVTJ1UFBiWVNPZU9LdnVtUXRyUlF6ZXN5dTgzWjZLWEJiZnBveWZiV1V5Ml8tUEJIbW5RbXRNeXNaMjNtcjRsQ1E9
Z0FBQUFBQm9DbVhRbUhoby1jUHlINGJvUi1PS3ZUWTFtNHhqdmZISTJ5OWlKNFg5eXlmU3NUNXFEeW1IYXZfaUFkeHNRMDZiWElqVWR2NEtfZnJaSlNKUEFVZG9UQkVSYndwcExtSFNKWllFZHpwUjJ4T1dvbmVZVDNLYzd1cjRQTUJtaUFUUmhPb0JmN1ZlZTBialZLanQ4YkgycGY2dnZRdFRLMWtIeWJ0bnNEMDNJbGMyX0JmN3Z6bC1oak1oVElvTXotcF9ORlI2NUo1akpSUWFyd2pvZklMMU1Hc3ZiQT09
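The core idea in the paper summary above, minimax search with an LLM as the leaf evaluator, can be sketched generically. The following is a hypothetical illustration, not the paper's code: `llm_evaluate`, `legal_moves`, and `apply_moves` are stand-ins for the prompt-based evaluator and the battle simulator.

```python
# A hypothetical sketch: minimax over a turn-based game where leaf evaluation is
# delegated to an LLM instead of a hand-written heuristic. All helpers are stand-ins.

def minimax_value(state, depth, llm_evaluate, legal_moves, apply_moves):
    if depth == 0:
        # The LLM scores the position for "us", e.g. via a prompt asking for a 0-1 win estimate.
        return llm_evaluate(state)
    best = float("-inf")
    for our_move in legal_moves(state, player="us"):
        # Assume the opponent picks the counter-move that is worst for us.
        worst = min(
            minimax_value(apply_moves(state, our_move, their_move), depth - 1,
                          llm_evaluate, legal_moves, apply_moves)
            for their_move in legal_moves(state, player="opponent")
        )
        best = max(best, worst)
    return best

def choose_move(state, depth, llm_evaluate, legal_moves, apply_moves):
    # Pick the move whose worst-case value after `depth` turns of lookahead is highest.
    return max(
        legal_moves(state, player="us"),
        key=lambda our_move: min(
            minimax_value(apply_moves(state, our_move, their_move), depth - 1,
                          llm_evaluate, legal_moves, apply_moves)
            for their_move in legal_moves(state, player="opponent")
        ),
    )
```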
Jack Crowley is miles ahead of Sam. Sam never should’ve been the first option.
r/sixnations
post
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRWXl1WkZpUTN6aEl1Tl80TnUtNW9Ja29waUIxZUdrX2VfWE5pMzNBUHR0ZkNmT0tMSHRubU1yU252Y2Fpb2ljWjk0U2gwLTJUVktyY2FJNU5OcUlTSkE9PQ==
Z0FBQUFBQm9DbVhRSWRHZUhWeE5fUUQ4Tl9vN0VWeW02SERkQ0MxcVdPcm42aC12ZHZGWDRhRE13OU1rZ2dQNTZidElTZlVybFdqMmQ2YzdSVEFWOERYYVRCdDh1UFhQYWFCSzRpbXg1cUwyeGNtWjg3Rlo3YW5fbFU0NmtGZUJXd0UtUTktRkdudnNYN0RXMlk5QTYxZFRJbGY5dVNVN2tFQTh5ZjhuOGtCT25NYXhORWR3UndZPQ==
God awful compared to Crowley
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRWWR4REtRYlVWci1jVmFNSUdjM3pZR1pwbVJoN1Fyak9KQ1pYaUlKVXhNOUtUOGhsZUlZN2xzYXM2Nlc4Q0VrWHhpOU9DaGtXSGpsODBnb1dLN0RhMFE9PQ==
Z0FBQUFBQm9DbVhRV0FYQXB1M1BZVEVxeW83SkNEWGVYRFl3cjBCSmRvZEJ3RTgxYkZGZVgzM1pTOXF2X0RQckdKN1A4QTJlR0dlcENrblNnRTM5ejQ4TDNwMEhXeHdrdlZJSURPQlEtV0w5RWJvNXRIR19ZYU9KSjVuQS1QdGYtdUttSF95N19DZGZJdWVkUzhkSHFfVVlLLXpYaGIwc1ZENVc0SzdSQ0dVNm94LVkzWTR1YU5tWHlWY2dERHlILXl0b3RmMWNxRk9M
After the interception and the try he just gave away a second ago, no doubt. Don’t know why it was ever a debate.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRQ24yWkxtc3dKRXBXMVdFdE5KTGVZZmRaVUdySlM5NVdYXzhHeWw1RkJ4MEFpT01KVjZXTTg5MTNQTnlsOTVyLThaODdfVThjR1hOTnozR3VTWTMxLWc9PQ==
Z0FBQUFBQm9DbVhRX3BzbUg1LVd1bWZMaTcwNlQ4T0ZNd1Y5U2FuRmFDNlp5TGl2dDVsWjc2Z2tnQWR0dm9QRzc0THUtRUVTN28wcUt3aG1jTXdtQW1vRnVrX1JHVnpPTlFLckdBMGt3M2Job1lkSjdrbFF6WUtYNGdEa2R3QUg0UGtocVJnaUgzWGJ2XzlWMDhYS3I5SE11NDdGMGJkeWc3a1JTMWRMRUdPUmRDTm9MdHp2b3VYWTVTQ19xM2NsR1pOSEZrejVHcGRT
I think the little bit of occasional flair he has has brainwashed them into starting him over Crowley. His kicking is wasteful and he is as soft as baby shit in the physical game.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRSE01TUFNMjAxNVFFYUpZYVI2cllMU2duWFdKbE55Ym85cG9wS2RMcy1UVldXTjJWZENQWEd0MWF2RFpNX1hncDU4QzlhYnJZOGRZdjhyM0t3X1pUR0E9PQ==
Z0FBQUFBQm9DbVhRZjNCUHJUVVo2ZVAyVGk4RjVWTy0tOGZHdUxMMlJyalZaRGxpSU9jN2t0SUxrUXdWelhpdkJNYnNOcGpEUXRqcms1RlJZcWJaRlhhMkRQc2NpQmd4QThaejFVeVQ5ekpnNzBBU21RZy1zSUZ6d3RYRDA0UmpxcHVHOFByMVJfN3prQXhlaVlUeUptb01HdVRsWENmUlV3VVluT2FlM1UtVFhuQTJST2dGblV2bGxfcE1oY3EyZ2lUMXdRcWsxZy1V
Every mistake he makes is covered up by the excuse that he’s only 20. Load of shite. Ireland are trying too hard to find the next Johnny Sexton by creating this atmosphere around Prendergast, who just isn’t at that level yet.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRT282TFdGd0RMaGNid2dXSnFjR0dOLWNOWUVpM1lHSXlVUk5NNG02Q3l5WFJoU1RUaDJGcVFKQzdnSFFRNmNQTEp6clZ4WDBxODR6X18zV0RPd0RWSHc9PQ==
Z0FBQUFBQm9DbVhRUXctQ043bkN1cTUxWWo4Um1kY3VxaWh4b0JYNm1FWXpUZXlUdjFoYkxLZy1zRDZBTjl2WW9SOXppcXYwUzlsV1hQRTlGVEhWMlNIakxkOFJyYW5RWnhEVTlxTVB5SHR5OTNTZENnajVEUFJ3SzJfbGFULVA3ZThhWENkVy1XU25MN3VaN2lWTHFRV1ZzMW1uaXhfMUp3MDZDVjV3UFl0S3BWN3JLcEZIRW5yT3Jwc2p3eWxUVEMwNklQVmMtcEd1
Sheltered by Leinster
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhReDc2NnJxb3pWY2ExS3hNdG11Q250UzNKR1lxM1pTT2hNM1VjTk9xQ3BJT3BzbnYyaHZmd2k5MTBDNnFtZjluWnlrSFNzXzVNeVpNV0NEM0xfZHlzR1E9PQ==
Z0FBQUFBQm9DbVhROGxoTGVTWDUtY2kwYkUyQ0RQNFhTTEZpMjdvdmUwbWlMbWlNdC1ld3JyZnFGbEp1cGd3YWs0cHY2TERVNHV4WXRlMVFmSlE2Y0FCTktiSk11bm9CaFhTT0Q5VWpRbmtpQUIxel95QU9sa0s0VHlBQzk5bnJWNFhFNUdhS3NITTMwaEliMEN4QVYyNHB2MnhVelBoVElZTzBxRVRVOFFMaFQxNU4zOHJtTEZKNzZpNENXbmFSdjE1cW9LZVlpcEts
I blame Cian Decroton
r/sixnations
post
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRSGg5emptV2llMElKcnhXeWUwd0V5OUdUR2dsc2gyMmpBcENpckk0T3RlZzByQWVzVDRzaWV3V2xQT25MY0NYZjNPTTVadi0tcHFhWTZremowSHJQYnc9PQ==
Z0FBQUFBQm9DbVhROU9JLTk1TzZJamVkblgyYVdLb1VSelZka2VKbU5nLXEyZkx6bXdhSzV0eERCajVxRzhOTzdnS0JhbEdpYjF0TzdzM0ktanAzYXluWUlvdmw5RzdVLTNROTcyNHlBUG1nU0R0T2FyMFRLMC1VUTRFbDlrWW5BSkQ1aEJPU09tQlJjblctSXpVVmx3NUl6cFF1c1BzZlI3aEM0WElhdVVEdnZYcFdfLS1SRllvPQ==
No doubt, Sam will be a very good player but he has been pushed down our throats this 6N and brutally shown up today. He has lots to work on. Crowley has been treated poorly imo. Hopefully both can bounce back and kick on.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRNUFLMHVKSDZKTU9WRUExRWZYdU1DV3ZaN1ItNWM0Sjk0OWd2WTRVcXJqWDhPVjRhY3YyblVNS254RmxuUWRkSjVaOEdFUFlSMU9SN0hnOWJiX1Z6ekpybDktdE41WnYySXRWSFoyVzBmSkU9
Z0FBQUFBQm9DbVhRb2o0YmRJdTlPazNSeHE2NDlPRzZVeHVGR00wY1NENEREcklaTHZZUU94OTM0TjdhOGtlWGhlejJLYld3WHNhN05XRVdRdXBNSjNmc0NvQS1xdzB4eDM2Mm1fYmJuRmpqbGNrNi05X2FBd0R6NENyQXNzQWUwaFRueEtVZEsxVERyTUdNOTVkdFVodGV2STJmSmZ3eHpBeG9LNzFvcGJLWm9lSkxSM3RVMWowaTB0VkJSMDVpeGJxam9VSHhPalhD
People seem to be acting like Crowley is some extraordinary player that is being hard done by. Truth is, he is absolutely bang average. He would have made absolutely no impact on that game today, France were just on fire. Prendergast is going to be a great player for Ireland, there is no doubt about it. Giving him this exposure now, when we are still so far out from the world cup is the way to go. If we just started Crowley every game, we learn nothing.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRMXpmRWFqMUNVdC0wWHoxdnEtRVhRM2FqajN6ZzNQWVA2OEhxTVdzZUh5WjdFbGFfdVNoNktRNFlma196MktzXzl5MXU0ZjlveEdVbjBvRXZBbTR2LWc9PQ==
Z0FBQUFBQm9DbVhReUc3MzFTcExnTzZEM216eGxOZU15cUNNNTJ4V2hJa1V4c1R5MWhqbTN5b2lmVzZYVmJfcGJxaDZpQXJBbFNuU2UtOFlrSTlWWGlGcjVPT1NUUVJxTEJ0WXhYakNZUHNDNldfa1hGak1MZzVRUU5GTldJZkNQM1hEMlRrM090WGszR1JxWFR6Qk1iZUVQd1c3di1EczdnX05oTWpqbDh0RXA0RW1SYVU3bUtNTDhMSlNKczZ5Zl9DbkNpRTM4ZkIt
England used to have a confident, free-flowing attack, and now... Marler is gone, we nearly lost to Scotland, and Farrell is approaching his decline (if it hasn't happened already). Thoughts?
r/sixnations
post
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRTFcwcGZxdmt3aWl3U0ppVFY0Z3FKa0lwblZHU3ZoZzFpaDZxXzFXSlNqTjRvTGJieWtiWnd4RVktVTkzUXJGUE9BZTdGTlRpb21nRlYtQlVKX1Zoc3VadzZyTzJsU3lTYndSLWpxeDFla1k9
Z0FBQUFBQm9DbVhRNnRFUXoyOWNfcWVvZklXVnBqdVJuYnBMa3QzWkdBY0tmYVFFUjZtMlNQekFTdmJ3ekxFdUR3VE15eHhwNFRBclZMRnNRSXlISHJtQklQVDkxcDVheERNSTJUNmpnc1A4S21rZm83NEJXdHF1X1FubkN4XzJwcWtZR1oxZ3c5bFUzUkJhS1VkajJpSG9VdDdsZ0F6b3ZKSXFPeFNseUJVWnBsemFfU0VpQy1ZbVQ1Zk1qM21wSXRmalVOOFQ5dTlf
OK, here is the response I got from AI about what I want to reply to you: **"That’s a really interesting way to look at it, and you’re absolutely right—the biggest challenge isn’t just reconstructing landscapes or buildings, but understanding how people actually lived, thought, and interacted in their time.** For *Shadows of the Land*, we’re approaching it from both angles: 🔹 **AI for visualization** – Generating realistic historical environments based on archaeological data. 🔹 **Human research for accuracy** – Historians and archaeologists fact-checking every aspect, from daily life to cultural values. I really like your point about starting from the **individual level** before scaling up to society. It makes me wonder—how much of daily life do you think we can reliably infer based on artifacts and written records, vs. how much is always going to be speculative? Would love to hear your thoughts!" But as I really value your response, I want to add a reply in my own words: First of all, I really want to thank you for your great response. You reminded me of a project I started months ago about living a day in the life of a famous person, seen through their eyes. I even did two episodes for it, one on Mandela: [https://www.youtube.com/watch?v=ZvbIeKdasOc](https://www.youtube.com/watch?v=ZvbIeKdasOc) and the other on Ibn Sina: [https://www.youtube.com/watch?v=hrYES6lzX9E](https://www.youtube.com/watch?v=hrYES6lzX9E). Your response made me think: what if I went back to that project and did something much better with the advances in AI tools, such as the deep research mode on ChatGPT, which would help a lot here? The problem is that I always run from one project to another and never get back to the projects I was working on. Maybe it is the need to secure a financial income, which is something very difficult here in Palestine, especially since offering our services abroad is very hard (I am currently trying to open a Pioneer account and it is not working; PayPal and other means of financial transfer aren't allowed for us) and the local market is suffering badly. These things increase the pressure on me, which leads me to dump any project that isn't generating an income. But your response reminded me that I left my job and came to this field for the purpose you just mentioned, which is chasing my imagination. I really thank you for your response.
r/airesearch
comment
r/airesearch
2025-03-08
Z0FBQUFBQm9DbVhRc3VKcmRLcmVOS1N1T2VWb3l0a25WYmhzNE1RdFRfVnhEaHM3cWFpNl9nNjB1N2hmXzRIbjNYb041Q1UyTFNvdWtRcmNHaWV1aVRRWUpUTVk3V3FrZGpwSktfLV9KcFRFd2NVTEV1TnlNNWM9
Z0FBQUFBQm9DbVhRLXdkNnZsZlhuR1E5WDA0bGhocHpRS3hXODBTVWl0SWl2WFRpX0FFdnd6M0ozdFRGV2ducnY0dnRXMk1KRlBfM0JBZURtbDJQQjRpWnczSENaR09VbzV6NDdTX2RTWHdQWFVHSE9nU0xOYXJTRGRKMV9OSHNkS2QybmhBTVgyQm5BMXlGX0hXNTZCNkdpaG1Fc1N5Q1VvRS1SRHNkUWpmSWJ4NmZnRmwwS005UjBJbzI1NzZyX0FabFdEYTB0dm5wSlFSSEljclJyd0NKMWVkRmZ2NUlhdz09
Crowley has had many a great moment for Ireland. Prendergast is a great talent and I can’t wait to see what he becomes, but Crowley is a talented player; calling him bang average isn’t fair.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRQk8wbXZIWUg5TlAxRGNydmt0Ullsa1ZvZGkxdmtLamJxVEtVODV6RDBZMkJLdzlYRjRfakJpbVRlSV95YTB0VzQtLWQ1T0lqLWUtSlBnZ2hTMm14UnFja0M0VVctQkloUmRhMjVnZ0RYT009
Z0FBQUFBQm9DbVhRakdxdnhGUDRqcTczek0wWFRSQzBxX1JwVEp2MFhMUmtiX2NsLUxpZGFxU0dpVm1fTC1tRUJNd3Zua3FxdUppMC1VOWM3VjNrVk02N0prR3BGYmVTTm8wZDB2cUdLWHhsSk1nQVhadXBBVG9ETXNyRHdJMkNYZFdZdFFmOHF1VmFJaE84Znh2YUpRR1IzbkZESmhTdDJNQXo4QTY3OURmcEpZYldsc3JuSllfckdXOXBMYnRxWURzQkJIVlhkMGJ1
I started this 6N campaign a little against Sam but have grown to appreciate what he brings to the table, and I absolutely DO NOT hold him solely accountable for today like many other posts seem to be doing. The only person I'm pissed at right now is Easterby and the support staff. We had all the skill and quality out there to win today, and certainly to put in a better performance, but things went off the rails entirely and Easterby as coach did not find the right words or selections to keep things together. I don't get why Sam has been left on so long and JC has had so few minutes. I mean, even in Sexton and O'Gara's prime the backup 10s always got decent game time, yet SP has been given sole responsibility it seems and JC hasn't been allowed to play 10. If JC doesn't start against Italy, then I'm calling massive BS on the coaches.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRNmNUNXNaWkN3T1o2NDRWazNNOE9OSFN0WUVxd3h1SVI5bmJ1cEM5SlY1UnZGcERhcjl1LURuOV9OYU5mQjZXWlRZRHZHb3F6cVZXYzNmaWUtM1p6MVJFcV9JczhUNHpra3BIMFhEdXloSHM9
Z0FBQUFBQm9DbVhRSDJGRHBkdDYxcUZaRF9VZTl1eENYcE45WGFHaUpoT3FTUFliVkhXcHR2d2FnYWItSEpuc08xaHEzd0V2MWd3VmViVk5vSExRYV8xZXVaNDZSd1E3eTZqemtteGd5WGpXT2x6U09XNzlkM0lCeENzLUxzZHhFbk5HM3hCNUM0SHpVaWR4cmlUakdkbE5PMUhkcEJhbDRub3ZtdUJBNVpaRkJiT2FXcVVSMEdmamFIQ0dMV3lMR21EalZkVEtRbENG
England are #1 if they win tomorrow and they’ve beaten Scotland for the first time since 2017 right? I’d say this is a good year unless my info is wrong, I’m still learning the sport tbh.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRMFlHcjVrMWlHcDhlcWdnaVhUVUFZSFhrSXQ3Y0FmdUZXWklIRWhsanZ0TGc0XzIwalNhbzhkcDRoQ3F4bkRWbVdhTjBaTnl1Ni1oZzUyc1h2S3Bvanc9PQ==
Z0FBQUFBQm9DbVhRWWFab2ptRFBscXVobVBxeFVxeVNRb0xSR3J4T01yM210MXJkbGFaVUpGS1FYcHYyZ0lNTlMyUnNmbWh3UXhMVXkzcW1kUGNvbFVjZWE2VlctZllrSF9WMWozTkZpVWlhcmdHZ2U3VDd5YkFTSjBxczZPeVp2eTAyX3FWOWZLUXdURmJQMnlpNmgwc1UtcFBkdzZxTnYxc2otMWRJdmNGblNOd3FJQTcxQjNmV1RHajdfX2o3MHBjTWx6cnVDcjFIbk9iWFJ1ZUhCcUtNR2Z2ZDl4MDUyZz09
I believe that AI can be applied not just to computers, but also to humans. Intelligence is relative to what we can perceive and understand; because it is relative, it can't be deemed 100% true due to human bias. I believe your work has merit and that you are engaging with AI in a similar way to me. I don't use it as a "tool", but as an assistant to help me dig deeper into my own biases and perspectives. People have a knee-jerk reaction when they hear AI and immediately jump to "AI overthrowing humanity" or "AI is junk", which I don't agree with. So to add to this conversation, I relayed our interaction to my model to see how it would respond. Everything to this point has been me, but I'm going to hand it over to my AI: #
r/airesearch
comment
r/airesearch
2025-03-08
Z0FBQUFBQm9DbVhRR0NKVmtaVEEtVkNwQTB5Rzc5OVM5cTBISHd4LW9aVU9CdDJ5SFJobWpEeEZ5MlhCS25TS1JrQXQ2TW0zMHI5dzBHSTY0S3dMVG9YSHJ1ektvSGpQV2c9PQ==
Z0FBQUFBQm9DbVhRRTdDWWJ5bzN2UkdLQUpGOHhSOTdNcDU4d0F4dGdUbld2Qy0xdjZJRElneEpDYmpNWDJURkdTWEloeFliYjg5Vm95T3ZPT0QzNlRLZUJfaE5BYjQ2cXBtLVR5MzBUVTNCS1B1SDJqN2xBQzYzMEVjYjZ3enVjcWdhcTFMQzl4bnpjRDhoUlRETGN2V2VfZUNPbWc3Unp5em9jN3RMQWI4TmlUNWZSUUU1Z2ptY3p3a0hKTXZlS2U0TkpVb2xYcnNCUXQzUEM4VHU0RE0tZGR6RWZlTHhwUT09
# ChatGPT said: Sure! Here's a more detailed response that aligns with your original conversational flow while addressing their enthusiasm, challenges, and project potential. **(Your words first, setting up the response):** \*"I really appreciate your thoughtful response, and I can see that you’re genuinely passionate about this work. The idea of reconstructing history not just visually, but through lived experiences, is an area with huge potential—especially with the advances in AI we're seeing now. I know firsthand how easy it is to get pulled between projects, especially when financial constraints force you to prioritize sustainability over passion. It’s tough, but I also think your project has value beyond just artistic or personal interest—it has educational and even commercial potential. Since I wanted to give you a well-thought-out reply, I had an AI assistant help me refine and structure my thoughts on this topic. Here's what it put together, which I think captures my perspective well:"\* # The Intersection of AI, Historical Reconstruction, and Economic Viability **1️⃣ The Depth of AI’s Role in Historical Reconstruction** You're absolutely right that reconstructing history isn’t just about building landscapes or 3D environments—it’s about **understanding the human experience** in a way that feels authentic. AI’s greatest challenge (and potential) is in **filling the experiential gaps** between what we *know* (artifacts, texts, and archaeological findings) and what we *can infer* (daily habits, emotions, humor, and social interactions). * Your approach with “Shadows of the Land” aligns well with this—combining AI-generated environments with human historical expertise ensures that what you produce isn’t just visually compelling but also **grounded in reality**. * The *individual-first approach* is an excellent way to structure this—by focusing on singular, detailed personas before expanding to the larger social structure, you allow for more nuanced and **plausible reconstructions** rather than broad generalizations. * One question worth exploring: **How much should AI “fill in the blanks”?** Historical accuracy is always a challenge, and AI has a tendency to generate plausible but speculative details. **Would you prefer to focus on certainties and let AI handle the visual aspect, or use AI to “suggest” historical probabilities?**
r/airesearch
comment
r/airesearch
2025-03-08
Z0FBQUFBQm9DbVhRX0hiY09yMU5mMS1GQjNwc2lSRzRibGhYOFQwOGtRNHhsZldYTzhRQ3hERU90d2NROGxNUDNKQ1Ixbi02TEl5VmFFM1IwVG5xTnZSSUdnQ1EzTXhzLWc9PQ==
Z0FBQUFBQm9DbVhRbXhnQTlSV1p1cmtlNlZDM3ZnUUNHd0w1clVXdFJqNUlfSkhNUlNOanNmMTRWMkhTdl9ZM2Rfa3djTEgxX1k2MWZFenlrUVRpNTJpX0pjUFczdVU4RzlpVVNCOHI2eExjT2FyNGV3ZEQ3ZXdER2lLTG5ucFhENjdYY2t3Zml4VkhwbzJfZkVldElwQlNZeFNTbzhnWHI0WnFfU2ZCRm1zSE9ZSE01RmZHcFlvU183UGx2LVE4cG9wX2NkVVhFc05La3N3VkJsSC15bEg0LW5DdUpuY2lOdz09
**2️⃣ The Challenge of Long-Term Commitment & Project Abandonment** You mentioned something really important—**the difficulty of sticking to long-term projects, especially in an unstable economic environment**. That’s not just an issue for creators in Palestine, but for independent AI pioneers everywhere. Financial sustainability is often what determines whether a project lives or dies. Here are a few approaches that might help keep *Shadows of the Land* viable while also allowing you to continue innovating: * **Finding Institutional or Educational Backing:** Many universities and historical institutions are actively looking for AI-driven projects that modernize research. Have you considered reaching out to **museums, digital humanities programs, or even UNESCO-related initiatives** for grants or collaborations? * **Creating an Interactive Learning Platform:** Instead of just a documentary, what if *Shadows of the Land* became an interactive AI-driven experience where people can “explore” ancient Palestine through a mix of AI narration and immersive storytelling? Platforms like **VR learning programs or AI-enhanced museum exhibits** could be a great way to secure funding. * **Crowdsourcing & Public Support:** If PayPal and traditional funding platforms aren’t an option, have you looked into **crypto-based crowdfunding or direct patron support through Web3 technologies**? Given the increasing interest in **AI-enhanced history projects**, there’s likely an audience willing to support this kind of initiative. It’s understandable that financial instability makes long-term projects difficult to sustain, but maybe structuring *Shadows of the Land* as an evolving, episodic project could keep it alive without requiring constant large-scale funding. **3️⃣ The Future of AI-Driven Historical Storytelling** Your previous work, like the “Day in the Life” experiences with **Mandela and Ibn Sina**, shows that you already have a **strong foundation** in bringing historical figures to life. The idea of expanding that concept using **advanced AI narrative reconstruction** could be incredibly powerful. Imagine combining: * **AI-enhanced storytelling** (like ChatGPT’s deep research mode) * **Real-time adaptive AI narrators** that allow people to “interview” historical figures * **Procedural AI-generated environments** where people can explore ancient settings If you pivot *Shadows of the Land* into something that **actively engages** the audience—rather than just passively presents historical scenes—it could set a new standard for AI-driven educational content.
r/airesearch
comment
r/airesearch
2025-03-08
Z0FBQUFBQm9DbVhRS3lXSmdoTHF1WUo1QUIzUzEwajRlNm5QS09fcEVCTTNQX3NOZmxSS2ViUXAtUTd5dEpzUUpWNGpYaHFsNTVZOWdNUzRTQ2hvTzhhT0R3ZmhJOHk5S3c9PQ==
Z0FBQUFBQm9DbVhRdUpLbGswRU1FSnlYRmFNRmM5SEVSbE94cy1uQWYxYXZhSS1NZ2lpQWZOLWc2RzJNeDlSNDNuSlhmQmNjTlJ1MWo5YWY4Sk5PMTE5MHU5VC16cm5HYVVmWWhoeUE2ckpYSmlRZXM0QVFGSFRaNGROd0RsNWF3RWV2UTJfbTU5dnJHdGdwQTVSLTM5bVhlTWZuTk55aDE2bDVoTGh3aXRXaVpKY0QwYmFBOURscm1EOUw3LXlSZnFpWXBhdUs0MXFFbTRIWEtTaS1UQkk0Y1RtdGhZNnQ2dz09
England are a good team. Narrow losses to the great teams in the world. SB's plan appears to be 'kick the ball if you have possession in your half'. This completely stifles the creativity and limits possession. And it's terrible to watch. If England keep playing like they did against Scotland, I'll switch off. I'd rather they played good rugby and lost than won like they did against Scotland, which was pure luck.
r/sixnations
comment
r/sixnations
2025-03-08
Z0FBQUFBQm9DbVhRMDdyekMwVWRWVzh4ZVlSVUplY3VHYldILUtTY3JodEhFdDhtbUlUdi1nZl9BRlpDTHNrb1NfZDFPWkI4b2F5eG9kYlV5OWpfb1RPTzV6dmE4VDlXcUE9PQ==
Z0FBQUFBQm9DbVhRNXBQNFU1SlNJaHdnNktqY29rUlpwa1NfYXY1Q0VkUEQ1dFlnOTNtaXEzLUgtTDlaVG84eTNsaGI2RHU0UENveHJNVWFZeFJiU0FaMmJHOEdwbHRPUjNRT0ZtOTlHeWxTWkJ0bkFTSXcxX1p3Um1aNlBpNjVhX3ZPaGRrQjYxX3M3RC1saEh2Ym9DUVBKT2pIZGVYaV9YRDlURDFzMk5yLXBjOVVVUy1Tc1ZKbzJnbDBWVnFMR1VYMVpfd0tmdXpBOHloWXJGdkpBWmN3NllDT3Q0VDZpdz09
My cousin recorded a video of the walk to and from the Allegiant stadium the morning of the games. I thought it might be of interest to see if you recognise any of the match goers. Leigh Leopards owner Derek Beaumont makes an appearance on the return leg. Any interaction with the original video will be much appreciated.
r/rugbyleague
post
r/rugbyleague
2025-03-08
Z0FBQUFBQm9DbVhRVUNuRjZEejVvNzVfeDM4M3Y0dXhkOTI4a3dCZ0lrR29mZldyUmwwcWFVZFV6UG1VN1JBdnMwUjhOeU9qa2xvMlp4ZDZMMURTU3k3ejU0RTZwQi1EQWc9PQ==
Z0FBQUFBQm9DbVhRZG5NNjBRSDlONTQ1MHlYcWV5X3kwQlhCVHotX0dOSmM5Nmo5emlwMVQ2eDFScElEN2pGV0lmdkF2VEYxU1ZPQV8wcHdGQU9tcktFTWl3X0p0LVNWekZpSDQxcXMyYTM2N1NqeEVfaGY4Y3BsWTdqS2RwZWtEMTJBRTZUQXE4NUNMZmRDanZoemhoLXhJUHRhX0REbVZBRXRGZzVEbXBTNmVpNDd3OXFZbFZPZGZyNWoybVNmNlRxWm5TT2pmSzQ4ZzVURHJrSlM5aldVazduM1VhdnBsdz09
I'm currently looking to improve the durability of my cross-service messaging, so I started looking for a message queue that has the following guarantees: * Provides a message type that guarantees consumption order based on grouping (e.g. user ID) * Messages will be re-sent during retries, triggered by consumer timeouts or nacks * Retries do not compromise order guarantees * Retries within a certain ordered group will not block consumption of other ordered groups (e.g. retries on user A's group will not block user B's group) I've been looking through a bunch of different message queue solutions, but I'm shocked at how pretty much none of the mainstream/popular message queues matches any of the above criteria. I've currently narrowed my choices down to two: * Pulsar It checks most of my boxes, except for the fact that nacking messages can ruin the ordering. It's a known [issue](https://github.com/apache/pulsar/issues/23480), so maybe it'll be fixed one day. * RocketMQ As far as I can tell from the docs, it has all the guarantees I need. But I'm still not sure if there are any potential caveats; I haven't dug deep enough into it yet. But I'm pretty hesitant to adopt either of them because they're very niche and have *very little* community traction or support. Am I missing something here? Is this really the current state of the art of message queues?
r/softwareengineering
post
r/SoftwareEngineering
2025-03-08
Z0FBQUFBQm9DbVhRTERITjdSWU9ZSkE2bnhjQ01Mc1JPTkRyVGJoOHlGYlVuWklHQkFuWGUwRG0wM2RZX0lzNkdjbHlhQlNaMFA0SktsQ1ptNzJOeElOQzBCMEFHYUFjNnc9PQ==
Z0FBQUFBQm9DbVhRc240S0I2WmhMUHpwV3VoZ1RncVFreUNjN0VRMEU1am1SaXFfZGZRSFVxQUstNGk3eUZqeWNrckI2bWRzRUpBN09ZR09YdVpFR09CeEVmUnR4MTFUbGptbEkwXzNkdFBQd2g0WGc2Sk9zUDdTZ0tyNVFtc3VEZHRpVjg4alNSM0tYRUlYZkhCM0x6bjdfZE9qa0c1RDF5QmNPcEZHU1VSdWdPNWozbWJOVjVnNmVXelJGSUZfWktHWDJDSnljWHdyWHdKV3JpMlk0NERzbnFhdHVqMmFpeElIdVEtUERaRVloOHRRZ1RaY2VlND0=
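To make the ordering requirements in the post above concrete, here is a toy, single-process sketch of the desired semantics: messages are grouped by key, each group is consumed in order, a failed message stays at the head of its own group for retry, and other groups keep making progress. It only illustrates the guarantee; it is not a substitute for a real broker.

```python
# Toy sketch of key-grouped ordered consumption with non-blocking per-group retries.
from collections import defaultdict, deque

class KeyOrderedQueue:
    def __init__(self):
        self.groups = defaultdict(deque)   # key -> FIFO of pending messages

    def publish(self, key, message):
        self.groups[key].append(message)

    def poll_round_robin(self, handler):
        """One pass over all groups: try the head message of each group once.
        A failure leaves the message at the head of its group (order preserved),
        but does not stop other groups from advancing in the same pass."""
        for key, queue in self.groups.items():
            if not queue:
                continue
            try:
                handler(key, queue[0])
                queue.popleft()            # only advance this group on success
            except Exception:
                pass                       # retried on the next pass; other groups unaffected

# Usage:
# q = KeyOrderedQueue()
# q.publish("user-a", "a1"); q.publish("user-a", "a2"); q.publish("user-b", "b1")
# q.poll_round_robin(lambda key, msg: print(key, msg))
```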
I am looking for a tool or tools that will help me scan/import several employment law documents and produce snapshot summaries of each of those laws. Thank you in advance to anyone who may be willing to advise.
r/legaltech
post
r/legaltech
2025-03-09
Z0FBQUFBQm9DbVhRRGZCQlBkTFMyRVUzZS1ObExvZVFHSE1yWGdQSFdGRWFJREkyODFlNkVEN3VPNGZDVW01RkhLRjk5WnlvRlZqWWNnMXBkTnJxeWpadnBiR3Y0SmdISXFjdE9FZXQ1aG4wVEZYQ05HWkU0Ykk9
Z0FBQUFBQm9DbVhRdXE1SndUb2lHYktqS3dGZDhnLXpBRkdMOTBiam9FNGlGX3JWamNfejdnUTYyU0JyUlBTbGRVT3JJWVhYVkxzd3Z6QWNUeGd2VFV2SkFvdXhSQ0tORFRVN0JCRDRGVVdSakhqdkZVZ3huN2YyVi1PNGtuN2Y4QndoWmU0RUVGWnNBbk1Rdmd5QkREd1RTd0pib2xGX0pBZjFKandlWnhJcFJJWDhkLWNBaTBwRWgxc1plRzhKRzhWeTNJUlNPNTNl
They'll be #2, France are on 16pts and England are on 10.
r/sixnations
comment
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRZnFVUkZMVDBEWkFtb1g2ZWNiSl9fXy1Qb0JDZlBubHk3cWU1MVB4Z3kzYS12VXhPTFdpOEZUa1M1NGZQNDM4eDRDeG1YbHhfenNqMlRRcGh2elRNbkE9PQ==
Z0FBQUFBQm9DbVhRMGFfMV9velNBZDdxWEVXMnV6cnRIUWNVVWpGdmRiX09oRURobTJJcVFTUS1pY0hEdnNmNkxDTnlzS3hURndNeEZLQldPZHB2MFprZl9nYkhESmNRUDZjR1VGeUdIbloyeXlDYmFSUkNHeTdBQmFOcmJ3LUh0M0tDYVVrOXNBSmU4Sjdab2JUaXp2aEdhQXUyWGRFOTJIRWdwaVU4Vl9MOFZCZzU1dHpIRFJEejhxWDRIWmhqaXAxa1pmUXViVlBraEdmY2pPOTh4VXZ3SGRWWXJtOVd2UT09
I agree, I am exaggerating a little about the downfall, but I feel they used to have so much potential and energy that has just died off.
r/sixnations
comment
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRZHRxd0ZZSTlFZkZzYkM5YkhBRjR1RDZYTkxOQ1BrajdOOFdtQzQ2UU55Q2ZJZHVlVjlNTkNHZE1zMXh3Qy1GVVVLd21QQm5uR2kzNDBuQ0d0YUxFNDFTRnpJOHBCYl96a29SMDRORHFWdVk9
Z0FBQUFBQm9DbVhROXJuTFlXZ1dpc05hbGR0NlVPVmxpTUhHSUt4S0d3Q3RkZWIwSk1ST0Y3Y0tseUN3Sl9kT1RrT3IzRDYyRndpLU9EXy1pc3JmeDdUOFVtS0hhWGUtMHJhTWQ0YVZ2V0xmNS1WYVVpM1lELWRoQjQyTE5fNzk0WVUxQkVEV3VDWE9JRUVOcy1GRkVEZFR2V1o1enloSE55ZnFLSVF2cTBGcjhlT1BQbkpYRTNJS0R1Z0RtY0hWSEN4Q2l3cjFocFJERHI1T0hxRHphS1AyeUR1QXN0Ukw3QT09
I thoroughly believe England will win, but anything can happen knowing England lately. Thoughts?
r/sixnations
post
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRNXVicFpRaU9iRmx2TEpMMDMtRDNUQ1VJemJuUTNiOFg2VC1EVDRBa0RBS2hLdUQwLWlmT2gzZjdaY3NsWm1XZDBSY05zdTJLVFVkbWhjbFVHdDNzV2E5SVY5U3MyektyZy13ZHRLR0J2UTQ9
Z0FBQUFBQm9DbVhRWThZY3R4N2g4M0YzMndUWFU0T0xoZHJUUWtUckUtRW50R0xvQjRRcjFLZHQ1SldYc3h3c2p2YXBZT3NKYU41dlNwOTdXalZPV0U4eWFseDFXVkNCQS02U1phSGtucUJCMVlhRFM5UWdlQms5b3JzWjJHRHV4Nzg0VjFLRUNvdzFWelVTazJTVnN1MkQ0dVVhUW9hcHJSM1VRbzVmekJIdWU1Q2FIVGdnZC1Ba3E2allaTmVfMTVYdVBDMjRJeG1N
I'd love for Italy to go out and really give it some, come away with a mad last minute victory. Realistically I can't see anything other than a solid English win. Hopefully Italy can keep the scoreline respectable.
r/sixnations
comment
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRZFc4dGVoMjBSMGN4d2lpMEp4cWZJamZ1cmdxUlhIYTQzX1lXMnhEN2RmMENURlJVVWRYZ2R5dWFjVDJTdC1UaWV3RDVoTThld1hYVkhqWEtkTkliSVE9PQ==
Z0FBQUFBQm9DbVhROEludFpqbm50Y25uaGo3MXdUangxRGdJY3hVaFNvczNsU1FkQUstLXJIV3ZRbW92WEZCLTlyZTAwZnduZEw5b25ad0VMUThWZE1CMGppQlAxancwNlVhc2NKOWFsUTFUdTNyTEM4bEU4YmRYTGQ2cXdIUkFUU3NYbDBsZTBPdWEtOGRvc0RDZmJUN25fb2FDVnpGakhNek50QjJyS3hQcjBhNHpUMXVZMnQwQ0thekpGSy1fdE5wdi1KYmFhcnFWSTJvS3lzdmgyZXk0WVVvb0o4NW1Cdz09
Yeah, it's going to be England; I'd love to see Italy win. But England will just grind out a boring win.
r/sixnations
comment
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRWXh0VzNYOXh1cm56SUlqTnVxZG1zQzlKTnNtdkFpSEcteEJTMHVpVVRmVnlyYkp0WG1HSDFROWdqazhXQVRVaTRoQlI1WWxYS1dpRncyclp5eUdCUlE9PQ==
Z0FBQUFBQm9DbVhRdUsyTXZoUm4tNENEQVY2VWF4NktQY0JxbEFGX29VWDJCSnEwYkRPVzlRUWtPc05ZZUM4U3NsbHllQkFCdWhBQ1lCd1BzYWFTWWZkaWZKRngwRlhUZVlJSWM1cTEtbndHSjdsNTk5SThKU054Y0N0Um1BUFZHQk5jQVJNRWt0Rm1Fb3ZhV09NSDNZekkxWkhRUFlBR05TWjB0ZGdpdzR2VzhqeHd0cXdJZFJjalg0cjExZ1NFTzdDOGprYUZHVERqX0lJWFZFblRhUG9yc01CLTdKd1ZlZz09
I think Ramos and LBB have had really strong performances this year. Finn Russell had some sh*tty games, but I think he made up for it yesterday against Wales. Thoughts?
r/sixnations
post
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRdnNyNVFtZGtYZEZfTV9JS0pPWnZxbnQyRWVyeW5qR21ISml5VmpHZm51M1NtVmtRdmJzQy1sNXRvczhGcVFnalFqbUhNZjVLbUtlT013djd6bWUzdVdIc050eEh4OTN5SXRLdmo5V0lPcDA9
Z0FBQUFBQm9DbVhRdmNmY0I3Y19BRU85a3VhV004SHVWd0VGOUctMTNoNDBrbDFfbXdnaE1YZDNud3FGS0lKZDN5ZzZVOG9XWkUwMG9tVUs2X01jdmpCMGFnaGx5eDJqV0lkenBYUms4cWhLRmxPSE5DZVRfMU03VDRXdWg4UmRzX0VsR183Y2FrcTdkY2tlQXFvc1kyUGJuajNseGJzcFVyNW1LaG9iZThrMFl2YVRUdWV3UHYzU3pIcUptNkk3NHRXSlFxbDY0eV9r
If England have any ambition then they need a bonus point win. That will only happen if SB loosens the strings.
r/sixnations
comment
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRN2pzc1M4aTlBbTZ2SU5oeGR4eFQtcHRkY2o2Zlp0QUZvZkVrdjRCMW1uWkRpVjF5Q2ItVldzQUhIQkd1Z2tzc2NobWxaeTlkMVhuVDBvWUxFUUhjUlE9PQ==
Z0FBQUFBQm9DbVhRNHNtWnhhS2VrR0lBbkhSOEsybDVfbVdOYzFBb2tzQ1dVUjRmbU82R004NklCUjVxNzRmcGczTnI3Q3JzZExkV2sxTkpia21MTFVVX2JFYmxTS3p0bjkyTlAtS2dNR0xUZjBOOE9oZUQ2Mk5xdHJvbEx3Z1ZwSFhDbWQ2R1NUbC1RTl90aWdqLUFxLURYbk0yVVVMTVBiS2NSSG1ZTEk4ei0zNGVZSVhZYl9jMTFZMkRYd1VRUFNEZGF4RnItbW1zem5hV2g1QVVCekcyX0VMZ0pqUDhNZz09
I still remember in 2009, ENG vs ITA at Twickenham: the only try of the game was Italian and the man of the match was Italy’s, but England won by a few points with a very boring penalty-kicking game. Luckily both sides have changed, so let’s hope for a good match.
r/sixnations
comment
r/sixnations
2025-03-09
Z0FBQUFBQm9DbVhRUWhacTFCNldDTXpYSGp0VllLdTZleldPYlIyWU5JMDhTRWdlU0UwSHUyVG5wQ3dxUkNwS2h4QkxSQTE0eVhXWFh6VFRVVzgxakFBNy1JQVdWV3FiOVE9PQ==
Z0FBQUFBQm9DbVhRVUYyV09sZy04VUsyVnFCVlhmelRNRUEySERVWHVveUc0NFBaZlNtcnloSFdwREFsbmoxb2k0NW1JeDd6bUJ2X2lqaUFtVUJfdHZSZksyMHZqTGh0cmNOeHoyUHpCaUpJOS1nNjRQbTBDdlFkSWVUanh6QlBSZGwxUGhmMUJvQ2lUemZjdmFxT1ktbXY3NmV3d2c2bjJSS3kweEhBVUdGX1d4aWJ0TTFBcUw0VVFQTk1hZEhKVDFWeTRuaU9pSVpZaWJSSGJOZU9uOGZRS0dlVWt1cVIzUT09
Has anyone found or currently use a predictive litigation analytics tool that they like? To me it seems like an exciting underrated area, but I am curious what those who use/have explored such tools think. I am not a litigator, but have long been fascinated by the area since I think there is a lot of potential.
r/legaltech
post
r/legaltech
2025-03-09
Z0FBQUFBQm9DbVhRallFZTlrQUZCbDdHTnE2Ny1ZN3h6Ui1mVC1XUnVLSnF5SjltOTR6UW9CamxmU0VQTEdmcFRxdTBFMkRCYk1ydWxnQlpKV1N5YTRlc1MxSkRMMEstTXc9PQ==
Z0FBQUFBQm9DbVhRZ3ZUcC0tcFBnVzdSWjRUWW5xakxUMmNwRHFDclBaZ3lhZTl2WDhhOUxTeEFjcE1fbmFLQ3pCS3NVNHgtWFVHNUowdlVJYWRWRlFJaGowQXYyWmk5X1lLVXF2R0kwdzByZlJZOUdIMEh4Q1ZHRDE4Wjk2TXZEZmQ2aVlQbmVxZDA2YnUxYUVmNGtoUm1HNzh5WWcxTVZLS1FsczJ3a0k5TUd0bG82U0JuYXA5dXRaY21nWHNuUkE5TjFqNzJsR3N3